Column              Type            Range
query_id            stringlengths   32 – 32
query               stringlengths   6 – 5.38k
positive_passages   listlengths     1 – 17
negative_passages   listlengths     9 – 100
subset              stringclasses   7 values
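The schema above describes one record per query: a 32-character query_id, the query text, a list of positive passages, a larger list of negative passages (each passage a dict with "docid", "text", and "title"), and a subset tag such as "scidocsrr". As a minimal sketch of how such records might be loaded and inspected, assuming the data is published as a Hugging Face dataset; the repository id below is a placeholder, not the actual path:

```python
from datasets import load_dataset  # Hugging Face `datasets` library

# Placeholder repository id -- substitute the actual path of this corpus.
ds = load_dataset("org/passage-reranking-corpus", split="train")

record = ds[0]
print(record["query_id"])                # 32-character id
print(record["query"])                   # query text (6 chars up to ~5.38k chars)
print(len(record["positive_passages"]))  # 1-17 relevant passages
print(len(record["negative_passages"]))  # 9-100 non-relevant passages
print(record["subset"])                  # one of 7 subset labels, e.g. "scidocsrr"

# Each passage entry is a dict with "docid", "text", and "title" keys.
first_pos = record["positive_passages"][0]
print(first_pos["docid"], first_pos["title"], first_pos["text"][:80])
```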
019a68528da8cde23f9e1e07395d6dc8
Social Networks and the Diffusion of User-Generated Content: Evidence from YouTube
[ { "docid": "4253afeaeb2f238339611e5737ed3e06", "text": "Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.", "title": "" } ]
[ { "docid": "741aefcfa90a6a4ddc08ea293f13ec88", "text": "The Timeline Followback (TLFB), a retrospective calendar-based measure of daily substance use, was initially developed to obtain self-reports of alcohol use. Since its inception it has undergone extensive evaluation across diverse populations and is considered the most psychometrically sound self-report measure of drinking. Although the TLFB has been extended to other behaviors, its psychometric evaluation with other addictive behaviors has not been as extensive as for alcohol use. The present study evaluated the test-retest reliability of the TLFB for cocaine, cannabis, and cigarette use for participants recruited from outpatient alcohol and drug treatment programs and the general community across intervals ranging from 30 to 360 days prior to the interview. The dependent measure for cigarette smokers and cannabis users was daily use of cigarettes and joints, respectively, and for cocaine users it was a \"Yes\" or \"No\" regarding cocaine use for each day. The TLFB was administered in different formats for different drug types. Different interviewers conducted the two interviews. The TLFB collected highly reliable information about participants' daily use of cocaine, cannabis, and cigarettes from 30, 90, to 360 days prior to the interview. Findings from this study not only suggest that shorter time intervals (e.g., 90 days) can be used with little loss of accuracy, but also add to the growing literature that the TLFB can be used with confidence to collect psychometrically sound information about substance use (i.e., cocaine, cannabis, cigarettes) other than alcohol in treatment- and nontreatment-seeking populations for intervals from ranging up to 12 months prior to the interview.", "title": "" }, { "docid": "3bbd0b00fd4d9e2c7e1e7b4bd57d6352", "text": "We propose a new attention model for video question answering. The main idea of the attention models is to locate on the most informative parts of the visual data. The attention mechanisms are quite popular these days. However, most existing visual attention mechanisms regard the question as a whole. They ignore the word-level semantics where each word can have different attentions and some words need no attention. Neither do they consider the semantic structure of the sentences. Although the extended soft attention model for video question answering leverages the word-level attention, it performs poorly on long question sentences. In this paper, we propose the heterogeneous tree-structured memory network (HTreeMN) for video question answering. Our proposed approach is based upon the syntax parse trees of the question sentences. The HTreeMN treats the words differently where the visual words are processed with an attention module and the verbal ones not. It also utilizes the semantic structure of the sentences by combining the neighbors based on the recursive structure of the parse trees. The understandings of the words and the videos are propagated and merged from leaves to the root. Furthermore, we build a hierarchical attention mechanism to distill the attended features. We evaluate our approach on two data sets. The experimental results show the superiority of our HTreeMN model over the other attention models, especially on complex questions.", "title": "" }, { "docid": "56ff8aa7934ed264908f42025d4c175b", "text": "The identification of design patterns as part of the reengineering process can convey important information to the designer. 
However, existing pattern detection methodologies generally have problems in dealing with one or more of the following issues: identification of modified pattern versions, search space explosion for large systems and extensibility to novel patterns. In this paper, a design pattern detection methodology is proposed that is based on similarity scoring between graph vertices. Due to the nature of the underlying graph algorithm, this approach has the ability to also recognize patterns that are modified from their standard representation. Moreover, the approach exploits the fact that patterns reside in one or more inheritance hierarchies, reducing the size of the graphs to which the algorithm is applied. Finally, the algorithm does not rely on any pattern-specific heuristic, facilitating the extension to novel design structures. Evaluation on three open-source projects demonstrated the accuracy and the efficiency of the proposed method", "title": "" }, { "docid": "a475b9a4e9e7c948204cdfc2e7921f8a", "text": "; Numerous studies have provided supportive evidence for the efficacy of exposure-based treatments for many psychological disorders. However, surprisingly few therapists use exposure therapy in the clinical setting. Although the limited use of exposure-based treatments may be partially attributable to a shortage of suitably trained therapists, exposure therapy also suffers from a “public relations problem” predicated upon concerns that it is cruel and at odds with some ethical considerations (e.g., first do no harm). This article provides an overview of ethical issues and considerations relevant to the use of exposure therapy. It is argued that the degree to which ethical issues become problematic in implementing exposure-based treatments is largely dependent upon the therapist's ability to create an adequately safe and professional context. Specific strategies that may be employed for avoiding potential ethical conflicts in the use of exposure-based treatments are discussed.", "title": "" }, { "docid": "a1b7f477c339f30587a2f767327b4b41", "text": "Software game is a kind of application that is used not only for entertainment, but also for serious purposes that can be applicable to different domains such as education, business, and health care. Multidisciplinary nature of the game development processes that combine sound, art, control systems, artificial intelligence (AI), and human factors, makes the software game development practice different from traditional software development. However, the underline software engineering techniques help game development to achieve maintainability, flexibility, lower effort and cost, and better design. The purpose of this study is to assesses the state of the art research on the game development software engineering process and highlight areas that need further consideration by researchers. In the study, we used a systematic literature review methodology based on well-known digital libraries. The largest number of studies have been reported in the production phase of the game development software engineering process life cycle, followed by the pre-production phase. By contrast, the post-production phase has received much less research activity than the pre-production and production phases. 
The results of this study suggest that the game development software engineering process has many aspects that need further attention from researchers; that especially includes the postproduction phase.", "title": "" }, { "docid": "535adb02a713d96b076c9260239cd207", "text": "The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study between annotators, obtaining a percentage agreement of 91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain specific corpora on chemical and drug entities. The CHEMDNER corpus and annotation guidelines are available at: http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/.", "title": "" }, { "docid": "920545a998275ae81300a569d764a5e6", "text": "The housing market setting constitutes a fundamental model of exchange economies of goods. In most of the work concerning housing markets, it is assumed that agents own and are allocated discrete houses. The drawback of this assumption is that it does not cater for randomized assignments or allocation of time-shares. Recently, house allocation with fractional endowment of houses was considered by Athanassoglou and Sethuraman (2011) who posed the open problem of generalizing Gale’s Top Trading Cycles (TTC) algorithm to the case of housing markets with fractional endowments. In this paper, we address the problem and present a generalization of TTC called FTTC that is polynomialtime as well as core stable and Pareto optimal with respect to stochastic dominance even if there are indifferences in the preferences. We prove that if each agent owns one discrete house, FTTC coincides with a state of the art strategyproof mechanism for housing markets with discrete endowments and weak preferences. We show that FTTC satisfies a maximal set of desirable properties by proving two impossibility theorems. Firstly, we prove that with respect to stochastic dominance, core stability and no justified envy are incompatible. 
Secondly, we prove that there exists no individual rational, Pareto optimal and weak strategyproof mechanism, thereby answering another open problem posed by Athanassoglou and Sethuraman (2011). The second impossibility implies a number of results in the literature.", "title": "" }, { "docid": "1c20908b24c78b43a858ba154165b544", "text": "The implementation of concentrated windings in interior permanent magnet (IPM) machines has numerous advantages over distributed windings, with the disadvantage being mainly the decrease in saliency ratio. This paper presents a proposed finite element (FE) method in which the d- and q-axis inductances (Ld and Lq) of the IPM machine with fractional-slot concentrated windings can be accurately determined. This method is used to determine Ld and Lq of various winding configurations and to determine the optimum saliency ratio for a 12-slot 14-pole model with fractional-slot concentrated windings. FE testing were carried out by the use of Flux2D.", "title": "" }, { "docid": "b703e12e357acf852df2da2990922d71", "text": "People often fail to notice unexpected objects and events when they are focusing attention on something else. Most studies of this \"inattentional blindness\" use unexpected objects that are irrelevant to the primary task and to the participant (e.g., gorillas in basketball games or colored shapes in computerized tracking tasks). Although a few studies have examined noticing rates for personally relevant or task-relevant unexpected objects, few have done so in a real-world context with objects that represent a direct threat to the participant. In this study, police academy trainees (n = 100) and experienced police officers (n = 75) engaged in a simulated vehicle traffic stop in which they approached a vehicle to issue a warning or citation for running a stop sign. The driver was either passive and cooperative or agitated and hostile when complying with the officer's instructions. Overall, 58% of the trainees and 33% of the officers failed to notice a gun positioned in full view on the passenger dashboard. The driver's style of interaction had little effect on noticing rates for either group. People can experience inattentional blindness for a potentially dangerous object in a naturalistic real-world context, even when noticing that object would change how they perform their primary task and even when their training focuses on awareness of potential threats.", "title": "" }, { "docid": "760edd83045a80dbb2231c0ffbef2ea7", "text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. 
The code is available at https://github.com/zqs1022/interpretableCNN.", "title": "" }, { "docid": "cd587b4f35290bf779b0c7ee0214ab72", "text": "Time series data is perhaps the most frequently encountered type of data examined by the data mining community. Clustering is perhaps the most frequently used data mining algorithm, being useful in it's own right as an exploratory technique, and also as a subroutine in more complex data mining algorithms such as rule discovery, indexing, summarization, anomaly detection, and classification. Given these two facts, it is hardly surprising that time series clustering has attracted much attention. The data to be clustered can be in one of two formats: many individual time series, or a single time series, from which individual time series are extracted with a sliding window. Given the recent explosion of interest in streaming data and online algorithms, the latter case has received much attention.In this work we make a surprising claim. Clustering of streaming time series is completely meaningless. More concretely, clusters extracted from streaming time series are forced to obey a certain constraint that is pathologically unlikely to be satisfied by any dataset, and because of this, the clusters extracted by any clustering algorithm are essentially random. While this constraint can be intuitively demonstrated with a simple illustration and is simple to prove, it has never appeared in the literature.We can justify calling our claim surprising, since it invalidates the contribution of dozens of previously published papers. We will justify our claim with a theorem, illustrative examples, and a comprehensive set of experiments on reimplementations of previous work. Although the primary contribution of our work is to draw attention to the fact that an apparent solution to an important problem is incorrect and should no longer be used, we also introduce a novel method which, based on the concept of time series motifs, is able to meaningfully cluster some streaming time series datasets.", "title": "" }, { "docid": "c20c8cda27cd9045e1265458a2ff0b88", "text": "Storing and sharing of medical data in the cloud environment, where computing resources including storage is provided by a third party service provider, raise serious concern of individual privacy for the adoption of cloud computing technologies. Existing privacy protection researches can be classified into three categories, i.e., privacy by policy, privacy by statistics, and privacy by cryptography. However, the privacy concerns and data utilization requirements on different parts of the medical data may be quite different. The solution for medical dataset sharing in the cloud should support multiple data accessing paradigms with different privacy strengths. The statistics or cryptography technology a multiple privacy demands, which blocks their application in the real-world cloud. This paper proposes a practical solution for privacy preserving medical record sharing for cloud computing. Based on the classification of the attributes of medical records, we use vertical partition of medical dataset to achieve the consideration of different parts of medical data with different privacy concerns. 
It mainly includes four components, i.e., (1) vertical data partition for medical data publishing, (2) data merging for mecial dataset accessing, (3) integrity checking, and (4) hybrid search across plaintext and ciphertext, where the statistical analysis and cryptography are innovatively combined together to provide multiple paradigms of balance between medical data utilization and privacy protection. A prototype system for the large scale medical data access and sharing is implemented. Extensive experiments show the effectiveness of our proposed solution. K eywords: privacy protection, cloud storage, integrity check, medical data sharing.", "title": "" }, { "docid": "02cd879a83070af9842999c7215e7f92", "text": "Automatic genre classification of music is an important topic in Music Information Retrieval with many interesting applications. A solution to genre classification would allow for machine tagging of songs, which could serve as metadata for building song recommenders. In this paper, we investigate the following question: Given a song, can we automatically detect its genre? We look at three characteristics of a song to determine its genre: timbre, chord transitions, and lyrics. For each method, we develop multiple data models and apply supervised machine learning algorithms including k-means, k-NN, multi-class SVM and Naive Bayes. We are able to accurately classify 65− 75% of the songs from each genre in a 5-genre classification problem between Rock, Jazz, Pop, Hip-Hop, and Metal music.", "title": "" }, { "docid": "ba5d0acb79bcd3fd1ffdb85ed345badc", "text": "Although the Transformer translation model (Vaswani et al., 2017) has achieved state-ofthe-art performance in a variety of translation tasks, how to use document-level context to deal with discourse phenomena problematic for Transformer still remains a challenge. In this work, we extend the Transformer model with a new context encoder to represent document-level context, which is then incorporated into the original encoder and decoder. As large-scale document-level parallel corpora are usually not available, we introduce a two-step training method to take full advantage of abundant sentence-level parallel corpora and limited document-level parallel corpora. Experiments on the NIST ChineseEnglish datasets and the IWSLT FrenchEnglish datasets show that our approach improves over Transformer significantly. 1", "title": "" }, { "docid": "26c003f70bbaade54b84dcb48d2a08c9", "text": "Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery, experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS including the legal uses for TMS, and what is currently known about the proper storage and preparation of the anesthetic. 
We outline methods and precautions for administration and changes in fish behavior during progressively deeper anesthesia and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved and is an important tool for the intracoelomic implantation of electronic tags in fish.", "title": "" }, { "docid": "9f120418c7f3ac22ff3781f94fa7d6e1", "text": "This paper explores different ways to render world-wide geographic maps in virtual reality (VR). We compare: (a) a 3D exocentric globe, where the user’s viewpoint is outside the globe; (b) a flat map (rendered to a plane in VR); (c) an egocentric 3D globe, with the viewpoint inside the globe; and (d) a curved map, created by projecting the map onto a section of a sphere which curves around the user. In all four visualisations the geographic centre can be smoothly adjusted with a standard handheld VR controller and the user, through a head-tracked headset, can physically move around the visualisation. For distance comparison exocentric globe is more accurate than egocentric globe and flat map. For area comparison more time is required with exocentric and egocentric globes than with flat and curved maps. For direction estimation, the exocentric globe is more accurate and faster than the other visual presentations. Our study participants had a weak preference for the exocentric globe. Generally the curved map had benefits over the flat map. In almost all cases the egocentric globe was found to be the least effective visualisation. Overall, our results provide support for the use of exocentric globes for geographic visualisation in mixed-reality. CCS Concepts •Human-centered computing → Virtual reality; Geographic visualization; Empirical studies in HCI;", "title": "" }, { "docid": "dd32879d2b030aa4853f635504afdd98", "text": "A recent addition to Microsoft's Xbox Live Marketplace is a recommender system which allows users to explore both movies and games in a personalized context. The system largely relies on implicit feedback, and runs on a large scale, serving tens of millions of daily users. We describe the system design, and review the core recommendation algorithm.", "title": "" }, { "docid": "35b4f36d9d0249fb33d06cb7971d7bfd", "text": "In this paper, we introduce the completed local binary patterns (CLBP) operator for the first time on remote sensing land-use scene classification. To further improve the representation power of CLBP, we propose a multi-scale CLBP (MS-CLBP) descriptor to characterize the dominant texture features in multiple resolutions. Two different kinds of implementations of MS-CLBP equipped with the kernel-based extreme learning machine are investigated and compared in terms of classification accuracy and computational complexity. The proposed approach is extensively tested on the 21-class land-use dataset and the 19-class satellite scene dataset showing a consistent increase on performance when compared to the state of the arts.", "title": "" }, { "docid": "146185b62f79a684ed72940a01190ac7", "text": "Nearing 30 years since its introduction, 3D printing technology is set to revolutionize research and teaching laboratories. This feature encompasses the history of 3D printing, reviews various printing methods, and presents current applications. 
The authors offer an appraisal of the future direction and impact this technology will have on laboratory settings as 3D printers become more accessible.", "title": "" }, { "docid": "2fe11bee56ecafabeb24c69aae63f8cb", "text": "Enabled by virtualization technologies, various multi-tier applications (such as web applications) are hosted by virtual machines (VMs) in cloud data centers. Live migration of multi-tier applications across geographically distributed data centers is important for load management, power saving, routine server maintenance and quality-of-service. Different from a single-VM migration, VMs in a multi-tier application are closely correlated, which results in a correlated VM migrations problem. Current live migration algorithms for single-VM cause significant application performance degradation because intermediate data exchange between different VMs suffers relatively low bandwidth and high latency across distributed data centers. In this paper, we design and implement a coordination system called VMbuddies for correlated VM migrations in the cloud. Particularly, we propose an adaptive network bandwidth allocation algorithm to minimize the migration cost in terms of migration completion time, network traffic and migration downtime. Experiments using a public benchmark show that VMbuddies significantly reduces the performance degradation and migration cost of multi-tier applications.", "title": "" } ]
scidocsrr
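Records of this shape (a query plus labeled positive and negative passages) are commonly flattened into query-passage pairs when training or evaluating a passage reranker. The sketch below shows one plausible way to do that; the helper name and the choice to concatenate title and text are illustrative assumptions, not something mandated by the dataset.

```python
from typing import Dict, List, Tuple

def record_to_pairs(record: Dict) -> List[Tuple[str, str, int]]:
    """Flatten one record into (query, passage_text, label) pairs.

    Concatenating title and text is an illustrative choice; use whichever
    passage representation the downstream model expects.
    """
    pairs = []
    for passage in record["positive_passages"]:
        pairs.append((record["query"], f'{passage["title"]} {passage["text"]}'.strip(), 1))
    for passage in record["negative_passages"]:
        pairs.append((record["query"], f'{passage["title"]} {passage["text"]}'.strip(), 0))
    return pairs

# For the record shown above, one positive and many negatives yield
# one label-1 pair and between 9 and 100 label-0 pairs per query.
```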
006a24601fc766d2a79ed9a37ee4314e
Low quality fingerprint image enhancement based on Gabor filter
[ { "docid": "99c1ad04419fa0028724a26e757b1b90", "text": "Contrary to popular belief, despite decades of research in fingerprints, reliable fingerprint recognition is still an open problem. Extracting features out of poor quality prints is the most challenging problem faced in this area. This paper introduces a new approach for fingerprint enhancement based on Short Time Fourier Transform(STFT) Analysis. STFT is a well known technique in signal processing to analyze non-stationary signals. Here we extend its application to 2D fingerprint images. The algorithm simultaneously estimates all the intrinsic properties of the fingerprints such as the foreground region mask, local ridge orientation and local ridge frequency. Furthermore we propose a probabilistic approach of robustly estimating these parameters. We experimentally compare the proposed approach to other filtering approaches in literature and show that our technique performs favorably.", "title": "" } ]
[ { "docid": "5d79d7e9498d7d41fbc7c70d94e6a9ae", "text": "Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information of objects are first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zeroshot affordance prediction and object recognition given human poses.", "title": "" }, { "docid": "4f8f34e72fffd80c4ed18b1f5923eca2", "text": "For pharmaceutical distribution companies it is essential to obtain good estimates of medicine needs, due to the short shelf life of many medicines and the need to control stock levels, so as to avoid excessive inventory costs while guaranteeing customer demand satisfaction, and thus decreasing the possibility of loss of customers due to stock outages. In this paper we explore the use of the time series data mining technique for the sales prediction of individual products of a pharmaceutical distribution company in Portugal. Through data mining techniques, the historical data of product sales are analyzed to detect patterns and make predictions based on the experience contained in the data. The results obtained with the technique as well as with the proposed method suggest that the performed modelling may be considered appropriate for the short term product sales prediction.", "title": "" }, { "docid": "23e04fc18c03b69b3f5cf44e4be8b925", "text": "A land cover classification service is introduced toward addressing current challenges on the handling and online processing of big remote sensing data. The geospatial web service has been designed, developed, and evaluated toward the efficient and automated classification of satellite imagery and the production of high-resolution land cover maps. The core of our platform consists of the Rasdaman array database management system for raster data storage and the open geospatial consortium web coverage processing service for data querying. Currently, the system is fully covering Greece with Landsat 8 multispectral imagery, from the beginning of its operational orbit. Datasets are stored and preprocessed automatically. A two-stage automated classification procedure was developed which is based on a statistical learning model and a multiclass support vector machine classifier, integrating advanced remote sensing and computer vision tools like Orfeo Toolbox and OpenCV. The framework has been trained to classify pansharpened images at 15-m ground resolution toward the initial detection of 31 spectral classes. The final product of our system is delivering, after a postclassification and merging procedure, multitemporal land cover maps with 10 land cover classes. The performed intensive quantitative evaluation has indicated an overall classification accuracy above 80%. 
The system in its current alpha release, once receiving a request from the client, can process and deliver land cover maps, for a 500-$\\text{km}^2$ region, in about 20 s, allowing near real-time applications.", "title": "" }, { "docid": "030f0d829b79593f375c97f9bbb1ee8a", "text": "The growing concern about the extent of joblessness in advanced Western economies is fuelled by the perception that the social costs of unemployment substantially exceed the costs of an economy operating below its potential. Rather, it is suspected that unemployment imposes an additional burden on the individual, a burden that might be referred to as the non-pecuniary cost of unemployment. Those costs arise primarily since employment is not only a source of income but also a provider of social relationships, identity in society and individual self-esteem. Darity and Goldsmith (1996) provide a summary of the psychological literature on the link between loss of employment and reduced wellbeing. Substantial efforts have been made in the past to quantify these nonpecuniary costs of unemployment. (See Junankar 1987; Björklund and Eriksson 1995 and Darity and Goldsmith 1996 for surveys of previous empirical studies.) To begin with, one can think of costs directly in terms of decreased psychological wellbeing. Beyond that, decreased wellbeing may express itself through adverse individual outcomes such as increased mortality, suicide risk and crime rates, or decreased marital stability. These possibilities have been explored by previous research. The general finding is that unemployment is associated with substantial negative non-pecuniary effects (see e.g. Jensen and Smith 1990; Junankar 1991). The case seems particularly strong for the direct negative association between unemployment and psychological wellbeing. For instance, Clark and Oswald (1994), using the first wave of the British Household Panel Survey, report estimates from ordered probit models in which a mental distress score is regressed on a set of individual characteristics, unemployment being one of them. They find that the effect of unemployment is both statistically significant and large: being unemployed increases mental distress by more than does suffering impaired health. Other researchers have used different measures of psychological wellbeing and yet obtained the same basic result, a large negative effect of unemployment on well being. Björklund (1985) and Korpi (1997) construct wellbeing indicators from symptoms of sleeplessness, stomach pain, depression and the like, while Goldsmith et al. (1995, 1996) measure", "title": "" }, { "docid": "6610f89ba1776501d6c0d789703deb4e", "text": "REVIEW QUESTION/OBJECTIVE\nThe objective of this review is to identify the effectiveness of mindfulness based programs in reducing stress experienced by nurses in adult hospitalized patient care settings.\n\n\nBACKGROUND\nNursing professionals face extraordinary stressors in the medical environment. Many of these stressors have always been inherent to the profession: long work hours, dealing with pain, loss and emotional suffering, caring for dying patients and providing support to families. Recently nurses have been experiencing increased stress related to other factors such as staffing shortages, increasingly complex patients, corporate financial constraints and the increased need for knowledge of ever-changing technology. Stress affects high-level cognitive functions, specifically attention and memory, and this increases the already high stakes for nurses. 
Nurses are required to cope with very difficult situations that require accurate, timely decisions that affect human lives on a daily basis.Lapses in attention increase the risk of serious consequences such as medication errors, failure to recognize life-threatening signs and symptoms, and other essential patient safety issues. Research has also shown that the stress inherent to health care occupations can lead to depression, reduced job satisfaction, psychological distress and disruptions to personal relationships. These outcomes of stress are factors that create scenarios for risk of patient harm.There are three main effects of stress on nurses: burnout, depression and lateral violence. Burnout has been defined as a syndrome of depersonalization, emotional exhaustion, and a sense of low personal accomplishment, and the occurrence of burnout has been closely linked to perceived stress. Shimizu, Mizoue, Mishima and Nagata state that nurses experience considerable job stress which has been a major factor in the high rates of burnout that has been recorded among nurses. Zangaro and Soeken share this opinion and state that work related stress is largely contributing to the current nursing shortage. They report that work stress leads to a much higher turnover, especially during the first year after graduation, lowering retention rates in general.In a study conducted in Pennsylvania, researchers found that while 43% of the nurses who reported high levels of burnout indicated their intent to leave their current position, only 11% of nurses who were not burned out intended to leave in the following 12 months. In the same study patient-to-nurse ratios were significantly associated with emotional exhaustion and burnout. An increase of one patient per nurse assignment to a hospital's staffing level increased burnout by 23%.Depression can be defined as a mood disorder that causes a persistent feeling of sadness and loss of interest. Wang found that high levels of work stress were associated with higher risk of mood and anxiety disorders. In Canada one out of every 10 nurses have shown depressive symptoms; compared to the average of 5.1% of the nurses' counterparts who do not work in healthcare. High incidences of depression and depressive symptoms were also reported in studies among Chinese nurses (38%) and Taiwanese nurses (27.7%). In the Taiwanese study the occurrence of depression was significantly and positively correlated to job stress experienced by the nurses (p<0.001).In a multivariate logistic regression, Ohler, Kerr and Forbes also found that job stress was significantly correlated to depression in nurses. The researchers reported that nurses who experienced a higher degree of job stress were 80% more likely to have suffered a major depressive episode in the previous year. A further finding in this study revealed that 75% of the participants also suffered from at least one chronic disease revealing a strong association between depression and other major health issues.A stressful working environment, such as a hospital, could potentially lead to lateral violence among nurses. Lateral violence is a serious occupational health concern among nurses as evidenced by extensive research and literature available on the topic. The impact of lateral violence has been well studied and documented over the past three decades. Griffin and Clark state that lateral violence is a form of bullying grounded in the theoretical framework of the oppression theory. 
The bullying behaviors occur among members of an oppressed group as a result of feeling powerless and having a perceived lack of control in their workplace. Griffin identified the ten most common forms of lateral violence among nurses as \"non-verbal innuendo, verbal affront, undermining activities, withholding information, sabotage, infighting, scape-goating, backstabbing, failure to respect privacy, and broken confidences\". Nurse-to-nurse lateral violence leads to negative workplace relationships and disrupts team performance, creating an environment where poor patient outcomes, burnout and high staff turnover rates are prevalent.Work-related stressors have been indicated as a potential cause of lateral violence. According to the Effort Reward Imbalance model (ERI) developed by Siegrist, work stress develops when an imbalance exists between the effort individuals put into their jobs and the rewards they receive in return. The ERI model has been widely used in occupational health settings based on its predictive power for adverse health and well-being outcomes. The model claims that both high efforts with low rewards could lead to negative emotions in the exposed employees. Vegchel, van Jonge, de Bosma & Schaufeli state that, according to the ERI model, occupational rewards mostly consist of money, esteem and job security or career opportunities. A survey conducted by Reineck & Furino indicated that registered nurses had a very high regard for the intrinsic rewards of their profession but that they identified workplace relationships and stress issues as some of the most important contributors to their frustration and exhaustion. Hauge, Skogstad & Einarsen state that work-related stress further increases the potential for lateral violence as it creates a negative environment for both the target and the perpetrator.Mindfulness based programs have proven to be a promising intervention in reducing stress experienced by nurses. Mindfulness was originally defined by Jon Kabat-Zinn in 1979 as \"paying attention on purpose, in the present moment, and nonjudgmentally, to the unfolding of experience moment to moment\". The Mindfulness Based Stress Reduction (MBSR) program is an educationally based program that focuses on training in the contemplative practice of mindfulness. It is an eight-week program where participants meet weekly for two-and-a-half hours and join a one-day long retreat for six hours. The program incorporates a combination of mindfulness meditation, body awareness and yoga to help increase mindfulness in participants. The practice is meant to facilitate relaxation in the body and calming of the mind by focusing on present-moment awareness. The program has proven to be effective in reducing stress, improving quality of life and increasing self-compassion in healthcare professionals.Researchers have demonstrated that mindfulness interventions can effectively reduce stress, anxiety and depression in both clinical and non-clinical populations. In a meta-analysis of seven studies conducted with healthy participants from the general public, the reviewers reported a significant reduction in stress when the treatment and control groups were compared. However, there have been limited studies to date that focused specifically on the effectiveness of mindfulness programs to reduce stress experienced by nurses.In addition to stress reduction, mindfulness based interventions can also enhance nurses' capacity for focused attention and concentration by increasing present moment awareness. 
Mindfulness techniques can be applied in everyday situations as well as stressful situations. According to Kabat-Zinn, work-related stress influences people differently based on their viewpoint and their interpretation of the situation. He states that individuals need to be able to see the whole picture, have perspective on the connectivity of all things and not operate on automatic pilot to effectively cope with stress. The goal of mindfulness meditation is to empower individuals to respond to situations consciously rather than automatically.Prior to the commencement of this systematic review, the Cochrane Library and JBI Database of Systematic Reviews and Implementation Reports were searched. No previous systematic reviews on the topic of reducing stress experienced by nurses through mindfulness programs were identified. Hence, the objective of this systematic review is to evaluate the best research evidence available pertaining to mindfulness-based programs and their effectiveness in reducing perceived stress among nurses.", "title": "" }, { "docid": "06f421d0f63b9dc08777c573840654d5", "text": "This paper presents the implementation of a modified state observer-based adaptive dynamic inverse controller for the Black Kite micro aerial vehicle. The pitch and velocity adaptations are computed by the modified state observer in the presence of turbulence to simulate atmospheric conditions. This state observer uses the estimation error to generate the adaptations and, hence, is more robust than model reference adaptive controllers which use modeling or tracking error. In prior work, a traditional proportional-integral-derivative control law was tested in simulation for its adaptive capability in the longitudinal dynamics of the Black Kite micro aerial vehicle. This controller tracks the altitude and velocity commands during normal conditions, but fails in the presence of both parameter uncertainties and system failures. The modified state observer-based adaptations, along with the proportional-integral-derivative controller enables tracking despite these conditions. To simulate flight of the micro aerial vehicle with turbulence, a Dryden turbulence model is included. The turbulence levels used are based on the absolute load factor experienced by the aircraft. The length scale was set to 2.0 meters with a turbulence intensity of 5.0 m/s that generates a moderate turbulence. Simulation results for various flight conditions show that the modified state observer-based adaptations were able to adapt to the uncertainties and the controller tracks the commanded altitude and velocity. The summary of results for all of the simulated test cases and the response plots of various states for typical flight cases are presented.", "title": "" }, { "docid": "89b5d821fcb5f9a91612b4936b52ad83", "text": "We investigate the benefits of evaluating Mel-frequency cepstral coefficients (MFCCs) over several time scales in the context of automatic musical instrument identification for signals that are monophonic but derived from real musical settings. We define several sets of features derived from MFCCs computed using multiple time resolutions, and compare their performance against other features that are computed using a single time resolution, such as MFCCs, and derivatives of MFCCs. We find that in each task - pair-wise discrimination, and one vs. 
all classification - the features involving multiscale decompositions perform significantly better than features computed using a single time-resolution.", "title": "" }, { "docid": "118db394bb1000f64154573b2b77b188", "text": "Question answering requires access to a knowledge base to check facts and reason about information. Knowledge in the form of natural language text is easy to acquire, but difficult for automated reasoning. Highly-structured knowledge bases can facilitate reasoning, but are difficult to acquire. In this paper we explore tables as a semi-structured formalism that provides a balanced compromise to this tradeoff. We first use the structure of tables to guide the construction of a dataset of over 9000 multiple-choice questions with rich alignment annotations, easily and efficiently via crowd-sourcing. We then use this annotated data to train a semistructured feature-driven model for question answering that uses tables as a knowledge base. In benchmark evaluations, we significantly outperform both a strong unstructured retrieval baseline and a highlystructured Markov Logic Network model.", "title": "" }, { "docid": "807e008d5c7339706f8cfe71e9ced7ba", "text": "Current competitive challenges induced by globalization and advances in information technology have forced companies to focus on managing customer relationships, and in particular customer satisfaction, in order to efficiently maximize revenues. This paper reports exploratory research based on a mail survey addressed to the largest 1,000 Greek organizations. The objectives of the research were: to investigate the extent of the usage of customerand market-related knowledge management (KM) instruments and customer relationship management (CRM) systems by Greek organizations and their relationship with demographic and organizational variables; to investigate whether enterprises systematically carry out customer satisfaction and complaining behavior research; and to examine the impact of the type of the information system used and managers’ attitudes towards customer KM practices. In addition, a conceptual model of CRM development stages is proposed. The findings of the survey show that about half of the organizations of the sample do not adopt any CRM philosophy. The remaining organizations employ instruments to conduct customer satisfaction and other customer-related research. However, according to the proposed model, they are positioned in the first, the preliminary CRM development stage. The findings also suggest that managers hold positive attitudes towards CRM and that there is no significant relationship between the type of the transactional information system used and the extent to which customer satisfaction research is performed by the organizations. The paper concludes by discussing the survey findings and proposing future", "title": "" }, { "docid": "d509601659e2192fb4ea8f112c9d75fe", "text": "Computer vision has advanced significantly that many discriminative approaches such as object recognition are now widely used in real applications. We present another exciting development that utilizes generative models for the mass customization of medical products such as dental crowns. In the dental industry, it takes a technician years of training to design synthetic crowns that restore the function and integrity of missing teeth. Each crown must be customized to individual patients, and it requires human expertise in a time-consuming and laborintensive process, even with computer assisted design software. 
We develop a fully automatic approach that learns not only from human designs of dental crowns, but also from natural spatial profiles between opposing teeth. The latter is hard to account for by technicians but important for proper biting and chewing functions. Built upon a Generative Adversarial Network architecture (GAN), our deep learning model predicts the customized crown-filled depth scan from the crown-missing depth scan and opposing depth scan. We propose to incorporate additional space constraints and statistical compatibility into learning. Our automatic designs exceed human technicians’ standards for good morphology and functionality, and our algorithm is being tested for production use.", "title": "" }, { "docid": "db114a47e7e3d6cc7196d1f73c143bf2", "text": "OBJECT\nSeveral methods are used for stereotactically guided implantation of electrodes into the subthalamic nucleus (STN) for continuous high-frequency stimulation in the treatment of Parkinson's disease (PD). The authors present a stereotactic magnetic resonance (MR) method relying on three-dimensional (3D) T1-weighted images for surgical planning and multiplanar T2-weighted images for direct visualization of the STN, coupled with electrophysiological recording and stimulation guidance.\n\n\nMETHODS\nTwelve patients with advanced PD were enrolled in this study of bilateral STN implantation. Both STNs were visible as 3D ovoid biconvex hypointense structures located in the upper mesencephalon. The coordinates of the centers of the STNs were determined with reference to the patient's anterior commissure-posterior commissure line by using a new landmark, the anterior border of the red nucleus. Electrophysiological monitoring through five parallel tracks was performed simultaneously to define the functional target accurately. Microelectrode recording identified high-frequency, spontaneous, movement-related activity and tremor-related cells within the STNs. Acute STN macrostimulation improved contralateral rigidity and akinesia, suppressed tremor when present, and could induce dyskinesias. The central track, which was directed at the predetermined target by using MR imaging, was selected for implantation of 19 of 24 electrodes. No surgical complications were noted.\n\n\nCONCLUSIONS\nAt evaluation 6 months after surgery, continuous STN stimulation was shown to have improved parkinsonian motor disability by 64% and 78% in the \"off' and \"on\" medication states, respectively. Antiparkinsonian drug treatment was reduced by 70% in 10 patients and withdrawn in two patients. The severity of levodopa-induced dyskinesias was reduced by 83% and motor fluctuations by 88%. Continuous high-frequency stimulation of the STN applied through electrodes implanted with the aid of 3D MR imaging and electrophysiological guidance is a safe and effective therapy for patients suffering from severe, advanced levodopa-responsive PD.", "title": "" }, { "docid": "5f365973899e33de3052dda238db13c1", "text": "The global threat to public health posed by emerging multidrug-resistant bacteria in the past few years necessitates the development of novel approaches to combat bacterial infections. Endolysins encoded by bacterial viruses (or phages) represent one promising avenue of investigation. These enzyme-based antibacterials efficiently kill Gram-positive bacteria upon contact by specific cell wall hydrolysis. 
However, a major hurdle in their exploitation as antibacterials against Gram-negative pathogens is the impermeable lipopolysaccharide layer surrounding their cell wall. Therefore, we developed and optimized an approach to engineer these enzymes as outer membrane-penetrating endolysins (Artilysins), rendering them highly bactericidal against Gram-negative pathogens, including Pseudomonas aeruginosa and Acinetobacter baumannii. Artilysins combining a polycationic nonapeptide and a modular endolysin are able to kill these (multidrug-resistant) strains in vitro with a 4 to 5 log reduction within 30 min. We show that the activity of Artilysins can be further enhanced by the presence of a linker of increasing length between the peptide and endolysin or by a combination of both polycationic and hydrophobic/amphipathic peptides. Time-lapse microscopy confirmed the mode of action of polycationic Artilysins, showing that they pass the outer membrane to degrade the peptidoglycan with subsequent cell lysis. Artilysins are effective in vitro (human keratinocytes) and in vivo (Caenorhabditis elegans). Importance: Bacterial resistance to most commonly used antibiotics is a major challenge of the 21st century. Infections that cannot be treated by first-line antibiotics lead to increasing morbidity and mortality, while millions of dollars are spent each year by health care systems in trying to control antibiotic-resistant bacteria and to prevent cross-transmission of resistance. Endolysins--enzymes derived from bacterial viruses--represent a completely novel, promising class of antibacterials based on cell wall hydrolysis. Specifically, they are active against Gram-positive species, which lack a protective outer membrane and which have a low probability of resistance development. We modified endolysins by protein engineering to create Artilysins that are able to pass the outer membrane and become active against Pseudomonas aeruginosa and Acinetobacter baumannii, two of the most hazardous drug-resistant Gram-negative pathogens.", "title": "" }, { "docid": "6c3cd29b316d68d555bb85d9f4d48e04", "text": "Power API-the result of collaboration among national laboratories, universities, and major vendors-provides a range of standardized power management functions, from application-level control and measurement to facility-level accounting, including real-time and historical statistics gathering. Support is already available for Intel and AMD CPUs and standalone measurement devices.", "title": "" }, { "docid": "e2c5f409b4d56b9918107d33d5d83c7d", "text": "Dynamic price discrimination adjusts prices based on the option value of future sales, which varies with time and units available. This paper surveys the theoretical literature on dynamic price discrimination, and confronts the theories with new data from airline pricing behavior. Correspondence to: R. Preston McAfee, 100 Baxter Hall, California Institute of Technology, Pasadena, CA 91125, preston@mcafee.cc.", "title": "" }, { "docid": "922dfbab83e5ca879ef29fafa8d3635b", "text": "This chapter was originally published in the book Principles of Addiction. The copy attached is provided by Elsevier for the author's benefit and for the benefit of the author's institution, for non-commercial research, and educational use. 
This includes without limitation use in instruction at your institution, distribution to specific colleagues, and providing a copy to your institution's administrator.", "title": "" }, { "docid": "d62151bd254b7f031db58aa591360929", "text": "Article history: Received 27 October 2011 Received in revised form 26 February 2012 Accepted 23 June 2012 Available online 6 July 2012", "title": "" }, { "docid": "21723b8c561d2446f94888b53b64bbf7", "text": "A novel two-electrode biosignal amplifier circuit is demonstrated by using a composite transimpedance amplifier input stage with active current feedback. Micropower, low gain-bandwidth product operational amplifiers can be used, leading to the lowest reported overall power consumption in the literature for a design implemented with off-the-shelf commercial integrated circuits (11 μW). Active current feedback forces the common-mode input voltage to stay within the supply rails, reducing baseline drift and amplifier saturation problems that can be present in two-electrode systems. The bandwidth of the amplifier extends from 0.05-200 Hz and the midband voltage gain (assuming an electrode-to-skin resistance of 100 kΩ) is 48 dB. The measured output noise level is 1.2 mV pp, corresponding to a voltage signal-to-noise ratio approaching 50 dB for a typical electrocardiogram (ECG) level input of 1 mVpp. Recordings were taken from a subject by using the proposed two-electrode circuit and, simultaneously, a three-electrode standard ECG circuit. The residual of the normalized ensemble averages for both measurements was computed, and the power of this residual was 0.54% of the power of the standard ECG measurement output. While this paper primarily focuses on ECG applications, the circuit can also be used for amplifying other biosignals, such as the electroencephalogram.", "title": "" }, { "docid": "1e2a64369279d178ee280ed7e2c0f540", "text": "We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.", "title": "" }, { "docid": "d847ed8f2bc209285d80a7d26e577c5b", "text": "We propose a simple yet effective technique for neural network learning. The forward propagation is computed as usual. In back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are sparsified in such a way that only the top-k elements (in terms of magnitude) are kept. As a result, only k rows or columns (depending on the layout) of the weight matrix are modified, leading to a linear reduction (k divided by the vector dimension) in the computational cost. Surprisingly, experimental results demonstrate that we can update only 1–4% of the weights at each back propagation pass. This does not result in a larger number of training iterations. More interestingly, the accuracy of the resulting models is actually improved rather than degraded, and a detailed analysis is given. 
The code is available at https://github.com/jklj077/meProp.", "title": "" }, { "docid": "fc65bea49085eaec28d0d4ec28fe7f30", "text": "Student-centered strategies are being incorporated into undergraduate classrooms in response to a call for reform. We tested whether teaching in an extensively student-centered manner (many active-learning pedagogies, consistent formative assessment, cooperative groups; the Extensive section) was more effective than teaching in a moderately student-centered manner (fewer active-learning pedagogies, less formative assessment, without groups; the Moderate section) in a large-enrollment course. One instructor taught both sections of Biology 101 during the same quarter, covering the same material. Students in the Extensive section had significantly higher mean scores on course exams. They also scored significantly higher on a content postassessment when accounting for preassessment score and student demographics. Item response theory analysis supported these results. Students in the Extensive section had greater changes in postinstruction abilities compared with students in the Moderate section. Finally, students in the Extensive section exhibited a statistically greater expert shift in their views about biology and learning biology. We suggest our results are explained by the greater number of active-learning pedagogies experienced by students in cooperative groups, the consistent use of formative assessment, and the frequent use of explicit metacognition in the Extensive section.", "title": "" } ]
scidocsrr
6f84d7439a99bf6e6aa82ad641dee3a4
Revisiting NARX Recurrent Neural Networks for Long-Term Dependencies
[ { "docid": "64a731c3e7d98f90729afc838ccd032c", "text": "It has previously been shown that gradient-descent learning algorithms for recurrent neural networks can perform poorly on tasks that involve long-term dependencies, i.e. those problems for which the desired output depends on inputs presented at times far in the past. We show that the long-term dependencies problem is lessened for a class of architectures called nonlinear autoregressive models with exogenous (NARX) recurrent neural networks, which have powerful representational capabilities. We have previously reported that gradient descent learning can be more effective in NARX networks than in recurrent neural network architectures that have \"hidden states\" on problems including grammatical inference and nonlinear system identification. Typically, the network converges much faster and generalizes better than other networks. The results in this paper are consistent with this phenomenon. We present some experimental results which show that NARX networks can often retain information for two to three times as long as conventional recurrent neural networks. We show that although NARX networks do not circumvent the problem of long-term dependencies, they can greatly improve performance on long-term dependency problems. We also describe in detail some of the assumptions regarding what it means to latch information robustly and suggest possible ways to loosen these assumptions.", "title": "" } ]
[ { "docid": "48c851b54fb489cea937cdfac3ca8132", "text": "This paper describes a new system, dubbed Continuous Appearance-based Trajectory SLAM (CAT-SLAM), which augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance based loop closure. As in other approaches to appearance-based mapping, loop closure is performed without calculating global feature geometry or performing 3D map construction. Loop closure filtering uses a probabilistic distribution of possible loop closures along the robot’s previous trajectory, which is represented by a linked list of previously visited locations linked by odometric information. Sequential appearance-based place recognition and local metric pose filtering are evaluated simultaneously using a Rao-Blackwellised particle filter, which weights particles based on appearance matching over sequential frames and the similarity of robot motion along the trajectory. The particle filter explicitly models both the likelihood of revisiting previous locations and exploring new locations. A modified resampling scheme counters particle deprivation and allows loop closure updates to be performed in constant time for a given environment. We compare the performance of CAT-SLAM to FAB-MAP (a state-of-the-art appearance-only SLAM algorithm) using multiple real-world datasets, demonstrating an increase in the number of correct loop closures detected by CAT-SLAM.", "title": "" }, { "docid": "c0fc94aca86a6aded8bc14160398ddea", "text": "THE most persistent problems of recall all concern the ways in which past experiences and past reactions are utilised when anything is remembered. From a general point of view it looks as if the simplest explanation available is to suppose that when any specific event occurs some trace, or some group of traces, is made and stored up in the organism or in the mind. Later, an immediate stimulus re-excites the trace, or group of traces, and, provided a further assumption is made to the effect that the trace somehow carries with it a temporal sign, the re-excitement appears to be equivalent to recall. There is, of course, no direct evidence for such traces, but the assumption at first sight seems to be a very simple one, and so it has commonly been made.", "title": "" }, { "docid": "88de6047cec54692dea08abe752acd25", "text": "Heap-based attacks depend on a combination of memory management error and an exploitable memory allocator. Many allocators include ad hoc countermeasures against particular exploits but their effectiveness against future exploits has been uncertain. This paper presents the first formal treatment of the impact of allocator design on security. It analyzes a range of widely-deployed memory allocators, including those used by Windows, Linux, FreeBSD and OpenBSD, and shows that they remain vulnerable to attack. It them presents DieHarder, a new allocator whose design was guided by this analysis. DieHarder provides the highest degree of security from heap-based attacks of any practical allocator of which we are aware while imposing modest performance overhead. In particular, the Firefox web browser runs as fast with DieHarder as with the Linux allocator.", "title": "" }, { "docid": "3ecbf5194d32a49dfd1cb26b660f2a09", "text": "Acute scrotum presenting as the only initial manifestation of Henoch-Schönlein purpura (HSP) is so unusual that the diagnosis can easily be missed. We report this condition in a 4-year-old boy admitted with bronchopneumonia. 
Bilateral painful scrotal swelling with ecchymosis occurred on the second day of hospitalization. Scrotal sonography was performed and a good blood supply was documented. Scrotal nuclear scanning was performed and was consistent with bilateral epididymoorchitis. Multiple purpuric lesions over the lower extremities and perineal region developed on the third day of hospitalization. Intermittent abdominal pain and knee pain developed thereafter. HSP was diagnosed and steroids were prescribed. The symptoms subsided gradually and no complication was noted. This case reminds us that an acute scrotum may be the only initial manifestation of HSP. Sonography and nuclear scanning can help rule out other diseases.", "title": "" }, { "docid": "6fb0459adccd26015ee39897da52d349", "text": "Each year, thousands of software vulnerabilities are discovered and reported to the public. Unpatched known vulnerabilities are a significant security risk. It is imperative that software vendors quickly provide patches once vulnerabilities are known and users quickly install those patches as soon as they are available. However, most vulnerabilities are never actually exploited. Since writing, testing, and installing software patches can involve considerable resources, it would be desirable to prioritize the remediation of vulnerabilities that are likely to be exploited. Several published research studies have reported moderate success in applying machine learning techniques to the task of predicting whether a vulnerability will be exploited. These approaches typically use features derived from vulnerability databases (such as the summary text describing the vulnerability) or social media posts that mention the vulnerability by name. However, these prior studies share multiple methodological shortcomings that inflate predictive power of these approaches. We replicate key portions of the prior work, compare their approaches, and show how selection of training and test data critically affect the estimated performance of predictive models. The results of this study point to important methodological considerations that should be taken into account so that results reflect real-world utility.", "title": "" }, { "docid": "c8d936c8878a27015590bd7551023d79", "text": "Rich high-quality annotated data is critical for semantic segmentation learning, yet acquiring dense and pixel-wise ground-truth is both labor- and time-consuming. Coarse annotations (e.g., scribbles, coarse polygons) offer an economical alternative, with which training phase could hardly generate satisfactory performance unfortunately. In order to generate high-quality annotated data with a low time cost for accurate segmentation, in this paper, we propose a novel annotation enrichment strategy, which expands existing coarse annotations of training data to a finer scale. Extensive experiments on the Cityscapes and PASCAL VOC 2012 benchmarks have shown that the neural networks trained with the enriched annotations from our framework yield a significant improvement over that trained with the original coarse labels. It is highly competitive to the performance obtained by using human annotated dense annotations. The proposed method also outperforms among other state-of-the-art weakly-supervised segmentation methods.", "title": "" }, { "docid": "ae5976a021bd0c4ff5ce14525c1716e7", "text": "We present PARAM 1.0, a model checker for parametric discrete-time Markov chains (PMCs). PARAM can evaluate temporal properties of PMCs and certain extensions of this class. 
Due to parametricity, evaluation results are polynomials or rational functions. By instantiating the parameters in the result function, one can cheaply obtain results for multiple individual instantiations, based on only a single more expensive analysis. In addition, it is possible to post-process the result function symbolically using for instance computer algebra packages, to derive optimum parameters or to identify worst cases.", "title": "" }, { "docid": "852ff3b52b4bf8509025cb5cb751899f", "text": "Digital images are ubiquitous in our modern lives, with uses ranging from social media to news, and even scientific papers. For this reason, it is crucial evaluate how accurate people are when performing the task of identify doctored images. In this paper, we performed an extensive user study evaluating subjects capacity to detect fake images. After observing an image, users have been asked if it had been altered or not. If the user answered the image has been altered, he had to provide evidence in the form of a click on the image. We collected 17,208 individual answers from 383 users, using 177 images selected from public forensic databases. Different from other previously studies, our method propose different ways to avoid lucky guess when evaluating users answers. Our results indicate that people show inaccurate skills at differentiating between altered and non-altered images, with an accuracy of 58%, and only identifying the modified images 46.5% of the time. We also track user features such as age, answering time, confidence, providing deep analysis of how such variables influence on the users’ performance.", "title": "" }, { "docid": "e8b5fcac441c46e46b67ffbdd4b043e6", "text": "We present DroidSafe, a static information flow analysis tool that reports potential leaks of sensitive information in Android applications. DroidSafe combines a comprehensive, accurate, and precise model of the Android runtime with static analysis design decisions that enable the DroidSafe analyses to scale to analyze this model. This combination is enabled by accurate analysis stubs, a technique that enables the effective analysis of code whose complete semantics lies outside the scope of Java, and by a combination of analyses that together can statically resolve communication targets identified by dynamically constructed values such as strings and class designators. Our experimental results demonstrate that 1) DroidSafe achieves unprecedented precision and accuracy for Android information flow analysis (as measured on a standard previously published set of benchmark applications) and 2) DroidSafe detects all malicious information flow leaks inserted into 24 real-world Android applications by three independent, hostile Red-Team organizations. The previous state-of-the art analysis, in contrast, detects less than 10% of these malicious flows.", "title": "" }, { "docid": "9a79af1c226073cc129087695295a4e5", "text": "This paper presents an effective approach for resume information extraction to support automatic resume management and routing. A cascaded information extraction (IE) framework is designed. In the first pass, a resume is segmented into a consecutive blocks attached with labels indicating the information types. Then in the second pass, the detailed information, such as Name and Address, are identified in certain blocks (e.g. blocks labelled with Personal Information), instead of searching globally in the entire resume. 
The most appropriate model is selected through experiments for each IE task in different passes. The experimental results show that this cascaded hybrid model achieves better F-score than flat models that do not apply the hierarchical structure of resumes. It also shows that applying different IE models in different passes according to the contextual structure is effective.", "title": "" }, { "docid": "bade302d28048eeb0578e5289e7dba23", "text": "The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry. HPC Component Architecture 4", "title": "" }, { "docid": "b2a04969dc7d99eb7805807ce4961825", "text": "Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications to additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximations and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.", "title": "" }, { "docid": "142f47f01a81b7978f65ea63460d98e5", "text": "The developers of StarDog OWL/RDF DBMS have pioneered a new use of OWL as a schema language for RDF databases. This is achieved by adding integrity constraints (IC), also expressed in OWL syntax, to the traditional “open-world” OWL axioms. The new database paradigm requires a suitable visual schema editor. We propose here a two-level approach for integrated visual UML-style editing of extended OWL+IC ontologies: (i) introduce the notion of ontology splitter that can be used in conjunction with any OWL editor, and (ii) offer a custom graphical notation for axiom level annotations on the basis of compact UML-style OWL ontology editor OWLGrEd.", "title": "" }, { "docid": "d9230fd294bfad81756a1ebd94dc6adb", "text": "Linguistic and conceptual development converge crucially in the process of early word learning. 
Acquiring a new word requires the child to identify a conceptual unit, identify a linguistic unit, and establish a mapping between them. On the conceptual side, the child has to not only identify the relevant part of the scene being labeled, but also isolate a concept at the correct level of abstraction-the word 'dog' must be mapped to the concept dog and not to the concepts petting or collie, for example. On the linguistic side, the child must use the syntactic context in which the word appears to determine its grammatical category (e.g., noun, verb, adjective). But she also uses syntactic information, along with observation of the world and social-communicative cues, to make guesses at which concept the word picks out as well as its level of abstraction. We present evidence that young learners learn new words rapidly and extend them appropriately. However, the relative import of observational and linguistic cues varies as a function of the kind of word being acquired, with verbs requiring a richer set of conceptual and linguistic cues than nouns. Copyright © 2010 John Wiley & Sons, Ltd. For further resources related to this article, please visit the WIREs website.", "title": "" }, { "docid": "02f07e92c38a456ad23ffd869ef517fb", "text": "Recently, Mahalanobis metric learning has gained a considerable interest for single-shot person re-identification. The main idea is to build on an existing image representation and to learn a metric that reflects the visual camera-to-camera transitions, allowing for a more powerful classification. The goal of this chapter is twofold. We first review the main ideas of Mahalanobis metric learning in general and then give a detailed study on different approaches for the task of single-shot person re-identification, also comparing to the state-of-the-art. In particular, for our experiments we used Linear Discriminant Metric Learning (LDML), Information Theoretic Metric Learning (ITML), Large Margin Nearest Neighbor (LMNN), Large Margin Nearest Neighbor with Rejection (LMNN-R), Efficient Impostor-based Metric Learning (EIML), and KISSME. For our evaluations we used four different publicly available datasets (i.e., VIPeR, ETHZ, PRID 2011, and CAVIAR4REID). Additionally, we generated the new, more realistic PRID 450S dataset, where we also provide detailed segmentations. For the latter one, we also evaluated the influence of using well segmented foreground and background regions. Finally, the corresponding results are presented and discussed.", "title": "" }, { "docid": "826081f0775f6ab0a16170519a3a277d", "text": "Convolutional neural networks have been applied to a wide variety of computer vision tasks. Recent advances in semantic segmentation have enabled their application to medical image segmentation. While most CNNs use two-dimensional kernels, recent CNN-based publications on medical image segmentation featured three-dimensional kernels, allowing full access to the three-dimensional structure of medical images. Though closely related to semantic segmentation, medical image segmentation includes specific challenges that need to be addressed, such as the scarcity of labelled data, the high class imbalance found in the ground truth and the high memory demand of three-dimensional images. In this work, a CNN-based method with three-dimensional filters is demonstrated and applied to hand and brain MRI. Two modifications to an existing CNN architecture are discussed, along with methods on addressing the aforementioned challenges. 
While most of the existing literature on medical image segmentation focuses on soft tissue and the major organs, this work is validated on data both from the central nervous system as well as the bones of the hand.", "title": "" }, { "docid": "92099d409e506a776853d4ae80c4285e", "text": "Artificial intelligence (AI) has achieved superhuman performance in a growing number of tasks, but understanding and explaining AI remain challenging. This paper clarifies the connections between machine-learning algorithms to develop AIs and the econometrics of dynamic structural models through the case studies of three famous game AIs. Chess-playing Deep Blue is a calibrated value function, whereas shogi-playing Bonanza is an estimated value function via Rust’s (1987) nested fixed-point method. AlphaGo’s “supervised-learning policy network” is a deep neural network implementation of Hotz and Miller’s (1993) conditional choice probability estimation; its “reinforcement-learning value network” is equivalent to Hotz, Miller, Sanders, and Smith’s (1994) conditional choice simulation method. Relaxing these AIs’ implicit econometric assumptions would improve their structural interpretability. Keywords: Artificial intelligence, Conditional choice probability, Deep neural network, Dynamic game, Dynamic structural model, Simulation estimator. JEL classifications: A12, C45, C57, C63, C73.", "title": "" }, { "docid": "6cc046077267564ed38ed3b28e593ef1", "text": "Human activity recognition is an active area of research in Computer Vision. One of the challenges of activity recognition systems is the presence of noise between related activity classes along with high training and testing time complexity of the system. In this paper, we address these problems by introducing a Robust Least Squares Twin Support Vector Machine (RLS-TWSVM) algorithm. RLS-TWSVM handles the heteroscedastic noise and outliers present in the activity recognition framework. Incremental RLS-TWSVM is proposed to speed up the training phase. Further, we introduce the hierarchical approach with RLS-TWSVM to deal with the multi-category activity recognition problem. Computational comparisons of our proposed approach on four well-known activity recognition datasets along with real-world machine learning benchmark datasets have been carried out. Experimental results show that our method is not only fast but yields significantly better generalization performance and is robust in handling heteroscedastic noise and outliers.", "title": "" }, { "docid": "6bbbddca9ba258afb25d6e8af9bfec82", "text": "With the ever increasing popularity of electronic commerce, the evaluation of antecedents and of customer satisfaction have become very important for the cyber shopping store (CSS) and for researchers. The various models of customer satisfaction that researchers have provided so far are mostly based on the traditional business channels and thus may not be appropriate for CSSs.
This research has employed case and survey methods to study the antecedents of customer satisfaction. Through case methods, a research model with hypotheses is developed. And through survey methods, the relationships between antecedents and satisfaction are further examined and analyzed. We find five antecedents of customer satisfaction to be more appropriate for online shopping on the Internet. Among them, homepage presentation is a new and unique antecedent which has not existed in traditional marketing.", "title": "" } ]
scidocsrr
b91af8abf4811c4f7ca1c74aa8283d5d
Double-Deck Buck-Boost Converter With Soft Switching Operation
[ { "docid": "cf6816d0a38296a3dc2c04894a102283", "text": "This paper presents a high-efficiency positive buck- boost converter with mode-select circuits and feed-forward techniques. Four power transistors produce more conduction and more switching losses when the positive buck-boost converter operates in buck-boost mode. Utilizing the mode-select circuit, the proposed converter can decrease the loss of switches and let the positive buck-boost converter operate in buck, buck-boost, or boost mode. By adding feed-forward techniques, the proposed converter can improve transient response when the supply voltages are changed. The proposed converter has been fabricated with TSMC 0.35-μm CMOS 2P4M processes. The total chip area is 2.59 × 2.74 mm2 (with PADs), the output voltage is 3.3 V, and the regulated supply voltage range is from 2.5-5 V. Its switching frequency is 500 kHz and the maximum power efficiency is 91.6% as the load current equals 150 mA.", "title": "" } ]
[ { "docid": "029687097e06ed2d0132ca2fce393129", "text": "The V-band systems have been widely used in the aerospace industry for securing spacecraft inside the launch vehicle payload fairing. Separation is initiated by firing pyro-devices to rapidly release the tension bands. A significant shock transient is expected as a result of the band separation. The shock environment is defined with the assumption that the shock events due to the band separation are associated with the rapid release of the strain energy from the preload tension of the restraining band.", "title": "" }, { "docid": "4133f95e620cfec651eb8d7540b5bdda", "text": "Stock market is considered chaotic, complex, volatile and dynamic. Undoubtedly, its prediction is one of the most challenging tasks in time series forecasting. Moreover existing Artificial Neural Network (ANN) approaches fail to provide encouraging results. Meanwhile advances in machine learning have presented favourable results for speech recognition, image classification and language processing. Methods applied in digital signal processing can be applied to stock data as both are time series. Similarly, learning outcome of this paper can be applied to speech time series data. Deep learning for stock prediction has been introduced in this paper and its performance is evaluated on Google stock price multimedia data (chart) from NASDAQ. The objective of this paper is to demonstrate that deep learning can improve stock market forecasting accuracy. For this, (2D)2PCA + Deep Neural Network (DNN) method is compared with state of the art method 2-Directional 2-Dimensional Principal Component Analysis (2D)2PCA + Radial Basis Function Neural Network (RBFNN). It is found that the proposed method is performing better than the existing method RBFNN with an improved accuracy of 4.8% for Hit Rate with a window size of 20. Also the results of the proposed model are compared with the Recurrent Neural Network (RNN) and it is found that the accuracy for Hit Rate is improved by 15.6%. The correlation coefficient between the actual and predicted return for DNN is 17.1% more than RBFNN and it is 43.4% better than RNN.", "title": "" }, { "docid": "b29eec00ba053979967a61f595f22dfa", "text": "A novel method is presented for electrically tuning the frequency of a planar inverted-F antenna (PIFA). A tuning circuit, comprising an RF switch and discrete passive components, has been completely integrated into the antenna element, which is thus free of dc wires. The proposed tuning method has been demonstrated with a dual-band PIFA capable of operating in four frequency bands. The antenna covers the GSM850, GSM900, GSM1800, PCS1900 and UMTS frequency ranges with over 40% total efficiency. The impact of the tuning circuit on the antenna's efficiency and radiation pattern have been experimentally studied through comparison with the performance of a reference antenna not incorporating the tuning circuit. The proposed frequency tuning concept can be extended to more complex PIFA structures as well as other types of antennas to give enhanced electrical performance.", "title": "" }, { "docid": "031dbd65ecb8d897d828cd5d904059c1", "text": "Especially in ill-defined problems like complex, real-world tasks more than one way leads to a solution. Until now, the evaluation of information visualizations was often restricted to measuring outcomes only (time and error) or insights into the data set. 
A more detailed look into the processes which lead to or hinder task completion is provided by analyzing users' problem solving strategies. A study illustrates how they can be assessed and how this knowledge can be used in participatory design to improve a visual analytics tool. In order to provide the users a tool which functions as a real scaffold, it should allow them to choose their own path to Rome. We discuss how evaluation of problem solving strategies can shed more light on the users' \"exploratory minds\".", "title": "" }, { "docid": "4c4376a25aa61e891294708b753dcfec", "text": "Ransomware, a class of self-propagating malware that uses encryption to hold the victims’ data ransom, has emerged in recent years as one of the most dangerous cyber threats, with widespread damage; e.g., zero-day ransomware WannaCry has caused world-wide catastrophe, from knocking U.K. National Health Service hospitals offline to shutting down a Honda Motor Company in Japan [1]. Our close collaboration with security operations of large enterprises reveals that defense against ransomware relies on tedious analysis from high-volume systems logs of the first few infections. Sandbox analysis of freshly captured malware is also commonplace in operation. We introduce a method to identify and rank the most discriminating ransomware features from a set of ambient (non-attack) system logs and at least one log stream containing both ambient and ransomware behavior. These ranked features reveal a set of malware actions that are produced automatically from system logs, and can help automate tedious manual analysis. We test our approach using WannaCry and two polymorphic samples by producing logs with Cuckoo Sandbox during both ambient, and ambient plus ransomware executions. Our goal is to extract the features of the malware from the logs with only knowledge that malware was present. We compare outputs with a detailed analysis of WannaCry allowing validation of the algorithm’s feature extraction and provide analysis of the method’s robustness to variations of input data—changing quality/quantity of ambient data and testing polymorphic ransomware. Most notably, our patterns are accurate and unwavering when generated from polymorphic WannaCry copies, on which 63 (of 63 tested) antivirus (AV) products fail.", "title": "" }, { "docid": "88bb56e36c493ed2ac723acbc6090f2b", "text": "In this paper, we propose a generic point cloud encoder that provides a unified framework for compressing different attributes of point samples corresponding to 3D objects with an arbitrary topology. In the proposed scheme, the coding process is led by an iterative octree cell subdivision of the object space. At each level of subdivision, the positions of point samples are approximated by the geometry centers of all tree-front cells, whereas normals and colors are approximated by their statistical average within each of the tree-front cells. With this framework, we employ attribute-dependent encoding techniques to exploit the different characteristics of various attributes. All of these have led to a significant improvement in the rate-distortion (R-D) performance and a computational advantage over the state of the art. 
Furthermore, given sufficient levels of octree expansion, normal space partitioning, and resolution of color quantization, the proposed point cloud encoder can be potentially used for lossless coding of 3D point clouds.", "title": "" }, { "docid": "cb08df0c8ff08eecba5d7fed70c14f1e", "text": "In this article, we propose a family of efficient kernels for large graphs with discrete node labels. Key to our method is a rapid feature extraction scheme based on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequence of graphs, whose node attributes capture topological and label information. A family of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly efficient kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of edges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classification benchmark data sets in terms of accuracy and runtime. Our kernels open the door to large-scale applications of graph kernels in various disciplines such as computational biology and social network analysis.", "title": "" }, { "docid": "87aef15dc90a8981bda3fcc5b8045d7c", "text": "Human groups show structured levels of genetic similarity as a consequence of factors such as geographical subdivision and genetic drift. Surveying this structure gives us a scientific perspective on human origins, sheds light on evolutionary processes that shape both human adaptation and disease, and is integral to effectively carrying out the mission of global medical genetics and personalized medicine. Surveys of population structure have been ongoing for decades, but in the past three years, single-nucleotide-polymorphism (SNP) array technology has provided unprecedented detail on human population structure at global and regional scales. These studies have confirmed well-known relationships between distantly related populations and uncovered previously unresolvable relationships among closely related human groups. SNPs represent the first dense genome-wide markers, and as such, their analysis has raised many challenges and insights relevant to the study of population genetics with whole-genome sequences. Here we draw on the lessons from these studies to anticipate the directions that will be most fruitful to pursue during the emerging whole-genome sequencing era.", "title": "" }, { "docid": "f71987051ad044673c8b41709cb34df7", "text": "The quality and the correctness of software are often the greatest concern in electronic systems. Formal verification tools can provide a guarantee that a design is free of specific flaws. This paper surveys algorithms that perform automatic static analysis of software to detect programming errors or prove their absence. The three techniques considered are static analysis with abstract domains, model checking, and bounded model checking. A short tutorial on these techniques is provided, highlighting their differences when applied to practical problems. This paper also surveys tools implementing these techniques and describes their merits and shortcomings.", "title": "" }, { "docid": "ce2d1c0e113aafdb0db35a3e21c7f0ff", "text": "Previous works on facial expression analysis have shown that person specific models are advantageous with respect to generic ones for recognizing facial expressions of new users added to the gallery set.
This finding is not surprising, due to the often significant inter-individual variability: different persons have different morphological aspects and express their emotions in different ways. However, acquiring person-specific labeled data for learning models is a very time consuming process. In this work we propose a new transfer learning method to compute personalized models without labeled target data Our approach is based on learning multiple person-specific classifiers for a set of source subjects and then directly transfer knowledge about the parameters of these classifiers to the target individual. The transfer process is obtained by learning a regression function which maps the data distribution associated to each source subject to the corresponding classifier's parameters. We tested our approach on two different application domains, Action Units (AUs) detection and spontaneous pain recognition, using publicly available datasets and showing its advantages with respect to the state-of-the-art both in term of accuracy and computational cost.", "title": "" }, { "docid": "b27038accdabab12d8e0869aba20a083", "text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.", "title": "" }, { "docid": "4cda02d9f5b5b16773b8cbffc54e91ca", "text": "We present a novel global stereo model designed for view interpolation. Unlike existing stereo models which only output a disparity map, our model is able to output a 3D triangular mesh, which can be directly used for view interpolation. To this aim, we partition the input stereo images into 2D triangles with shared vertices. Lifting the 2D triangulation to 3D naturally generates a corresponding mesh. A technical difficulty is to properly split vertices to multiple copies when they appear at depth discontinuous boundaries. To deal with this problem, we formulate our objective as a two-layer MRF, with the upper layer modeling the splitting properties of the vertices and the lower layer optimizing a region-based stereo matching. 
Experiments on the Middlebury and the Herodion datasets demonstrate that our model is able to synthesize visually coherent new view angles with high PSNR, as well as outputting high quality disparity maps which rank at the first place on the new challenging high resolution Middlebury 3.0 benchmark.", "title": "" }, { "docid": "e361ea943f0e8baded23d179dfb612d0", "text": "In contrast to applications relying on specialized and expensive highly-available infrastructure, the basic approach of microservice architectures to achieve fault tolerance – and finally high availability – is to modularize the software system into small, self-contained services that are connected via implementation-independent interfaces. Microservices and all dependencies are deployed into self-contained environments called containers that are executed as multiple redundant instances. If a service fails, other instances will often still work and take over. Due to the possibility of failing infrastructure, these services have to be deployed on several physical systems. This horizontal scaling of redundant service instances can also be used for load-balancing. Decoupling the service communication using asynchronous message queues can increase fault tolerance, too. The Deutsche Bahn AG (German railway company) uses as system called EPA for seat reservations for inter-urban rail services. Despite its high availability, the EPA system in its current state has several disadvantages such as high operational cost, need for special hardware, technological dependencies, and expensive and time-consuming updates. With the help of a prototype, we evaluate the general properties of a microservice architecture and its dependability with reference to the legacy system. We focus on requirements for an equivalent microservice-based system and the migration process; services and data, containerization, communication via message queues; and achieving similar fault tolerance and high availability with the help of replication inside the resulting architecture.", "title": "" }, { "docid": "fed5b83e2e35a3a5e2c8df38d96be981", "text": "The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their Riluzole treatment effect, the only currently approved drug for this disease.", "title": "" }, { "docid": "db0581e9f46516ee1ed26937bbec515b", "text": "In this paper we address the problem of offline Arabic handwriting word recognition. Offline recognition of handwritten words is a difficult task due to the high variability and uncertainty of human writing. 
The majority of the recent systems are constrained by the size of the lexicon to deal with and the number of writers. In this paper, we propose an approach for multi-writers Arabic handwritten words recognition using multiple Bayesian networks. First, we cut the image in several blocks. For each block, we compute a vector of descriptors. Then, we use K-means to cluster the low-level features including Zernik and Hu moments. Finally, we apply four variants of Bayesian networks classifiers (Naïve Bayes, Tree Augmented Naïve Bayes (TAN), Forest Augmented Naïve Bayes (FAN) and DBN (dynamic bayesian network) to classify the whole image of tunisian city name. The results demonstrate FAN and DBN outperform good recognition rates.", "title": "" }, { "docid": "bb7ba369cd3baf1f5ba26aef7b5574fb", "text": "Static computer vision techniques enable non-intrusive observation and analysis of biometrics such as eye blinks. However, ambiguous eye behaviors such as partial blinks and asymmetric eyelid movements present problems that computer vision techniques relying on static appearance alone cannot solve reliably. Image flow analysis enables reliable and efficient interpretation of these ambiguous eye blink behaviors. In this paper we present a method for using image flow analysis to compute problematic eye blink parameters. The flow analysis produces the magnitude and direction of the eyelid movement. A deterministic finite state machine uses the eyelid movement data to compute blink parameters (e.g., blink count, blink rate, and other transitional statistics) for use in human computer interaction applications across a wide range of disciplines. We conducted extensive experiments employing this method on approximately 750K color video frames of five subjects", "title": "" }, { "docid": "947720ca5d07b210f3d519c7e8e93081", "text": "Previous work has shown that acts of self-regulation appear to deplete a psychological resource, resulting in poorer self-regulation subsequently. Four experiments using assorted manipulations and measures found that positive mood or emotion can counteract ego depletion. After an initial act of self-regulation, participants who watched a comedy video or received a surprise gift self-regulated on various tasks as well as non-depleted participants and signiWcantly better than participants who experienced a sad mood induction, a neutral mood stimulus, or a brief rest period. © 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "7110e68a420d10fa75a943d1c1f0bd42", "text": "This paper proposes a compact microstrip Yagi-Uda antenna for 2.45 GHz radio frequency identification (RFID) handheld reader applications. The proposed antenna is etched on a piece of FR4 substrate with an overall size of 65 mm × 55 mm ×1.6 mm and consists of a microstrip balun, a dipole, and a director. The ground plane is designed to act as a reflector that contributes to enhancing the antenna gain. The measured 10-dB return loss bandwidth and peak gain achieved by the proposed antenna are 380 MHz and 7.5 dBi, respectively. In addition, a parametric study is conducted to facilitate the design and optimization processes for engineers.", "title": "" }, { "docid": "59da726302c06abef243daee87cdeaa7", "text": "The present research aims at gaining a better insight on the psychological barriers to the introduction of social robots in society at large. 
Based on social psychological research on intergroup distinctiveness, we suggested that concerns toward this technology are related to how we define and defend our human identity. A threat to distinctiveness hypothesis was advanced. We predicted that too much perceived similarity between social robots and humans triggers concerns about the negative impact of this technology on humans, as a group, and their identity more generally because similarity blurs category boundaries, undermining human uniqueness. Focusing on the appearance of robots, in two studies we tested the validity of this hypothesis. In both studies, participants were presented with pictures of three types of robots that differed in their anthropomorphic appearance varying from no resemblance to humans (mechanical robots), to some body shape resemblance (biped humanoids) to a perfect copy of human body (androids). Androids raised the highest concerns for the potential damage to humans, followed by humanoids and then mechanical robots. In Study 1, we further demonstrated that robot anthropomorphic appearance (and not the attribution of mind and human nature) was responsible for the perceived damage that the robot could cause. In Study 2, we gained a clearer insight in the processes underlying this effect by showing that androids were also judged as most threatening to the human–robot distinction and that this perception was responsible for the higher perceived damage to humans. Implications of these findings for social robotics are discussed.", "title": "" } ]
scidocsrr
8bfb68a199f892182ba20c0b12bd3046
A High Efficiency Accelerator for Deep Neural Networks
[ { "docid": "065ca3deb8cb266f741feb67e404acb5", "text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet", "title": "" } ]
[ { "docid": "dc6fbeeb1d587f982dbbdf8c0f6d8953", "text": "With a wealth of scientifically proven health benefits, meditation was enjoyed by about 18 million people in the U.S. alone, as of 2012. Yet, there remains a stunning lack of convenient tools for promoting long-term and effective meditation practice. In this paper, we present MindfulWatch, a practical smartwatch-based sensing system that monitors respiration in real-time during meditation -- offering essential biosignals that can potentially be used to empower various future applications such as tracking changes in breathing pattern, offering real-time guidance, and providing an accurate bio-marker for meditation research. To this end, MindfulWatch is designed to be convenient for everyday use with no training required. Operating solely on a smartwatch, MindfulWatch can immediately reach the growing population of smartwatch users, making it ideal for longitudinal data collection for meditation studies. Specifically, it utilizes motion sensors to sense the subtle “micro” wrist rotation (0.01 rad/s) induced by respiration. To accurately capture breathing, we developed a novel self-adaptive model that tracks changes in both breathing pattern and meditation posture over time. MindfulWatch was evaluated based on data from 36 real-world meditation sessions (8.7 hours, 11 subjects). The results suggest that MindfulWatch offers reliable real-time respiratory timing measurement (70% errors under 0.5 seconds).", "title": "" }, { "docid": "be97998aa94ecb00bc01ef0e14254634", "text": "A recommender systemin ane-learningcontext is a software agent that tries to ”intellig ently” recommendactions to a learnerbasedon theactionsof previouslearners. This recommendationcould be an on-line activity such as doing anexercise, readingpostedmessagesona conferencing system,or runningan on-linesimulation,or couldbesimply a webresource. Theserecommendationsystemshave beentried in e-commer ceto enticepurchasingof goods,but haven’ t beentried in e-learning. This papersuggeststhe useof webmining techniquesto build such an agent that couldrecommendon-linelearningactivitiesor shortcutsin a coursewebsite basedon learners’ accesshistory to improve coursematerial navigationas well as assistthe online learningprocess.Thesetechniquesare consider ed integratedwebminingasopposedto off-line webminingused byexpertusers to discoveron-lineaccesspatterns.", "title": "" }, { "docid": "d3fc62a9858ddef692626b1766898c9f", "text": "In order to detect the Cross-Site Script (XSS) vulnerabilities in the web applications, this paper proposes a method of XSS vulnerability detection using optimal attack vector repertory. This method generates an attack vector repertory automatically, optimizes the attack vector repertory using an optimization model, and detects XSS vulnerabilities in web applications dynamically. To optimize the attack vector repertory, an optimization model is built in this paper with a machine learning algorithm, reducing the size of the attack vector repertory and improving the efficiency of XSS vulnerability detection. Based on this method, an XSS vulnerability detector is implemented, which is tested on 50 real-world websites. The testing results show that the detector can detect a total of 848 XSS vulnerabilities effectively in 24 websites.", "title": "" }, { "docid": "867e59b8f2dd4ccc0fdd3820853dc60e", "text": "Software product lines are hard to configure. 
Techniques that work for medium sized product lines fail for much larger product lines such as the Linux kernel with 6000+ features. This paper presents simple heuristics that help the Indicator-Based Evolutionary Algorithm (IBEA) in finding sound and optimum configurations of very large variability models in the presence of competing objectives. We employ a combination of static and evolutionary learning of model structure, in addition to utilizing a pre-computed solution used as a “seed” in the midst of a randomly-generated initial population. The seed solution works like a single straw that is enough to break the camel's back -given that it is a feature-rich seed. We show promising results where we can find 30 sound solutions for configuring upward of 6000 features within 30 minutes.", "title": "" }, { "docid": "8fd3cf98de49be86d14647368324dd75", "text": "System administration has often been considered to be a “practice” with no theoretical underpinnings. In this thesis, we begin to define a theory of system administration, based upon two activities of system administrators: configuration management and dependency analysis. We formalize and explore the complexity of these activities, and demonstrate that they are intractable in the general case. We define the concepts of system behavior, kinds of configuration operations, a model of configuration management, a model of reproducibility, and proofs that several parts of the process are NP-complete or NP-hard. We also explore how system administrators keep these tasks tractable in practice. This is a first step toward a theory of system administration and a common language for discussing the theoretical underpinnings of the practice.", "title": "" }, { "docid": "354b35bb1c51442a7e855824ab7b91e0", "text": "Educational games and intelligent tutoring systems (ITS) both support learning by doing, although often in different ways. The current classroom experiment compared a popular commercial game for equation solving, DragonBox and a research-based ITS, Lynnette with respect to desirable educational outcomes. The 190 participating 7th and 8th grade students were randomly assigned to work with either system for 5 class periods. We measured out-of-system transfer of learning with a paper and pencil pre- and post-test of students’ equation-solving skill. We measured enjoyment and accuracy of self-assessment with a questionnaire. The students who used DragonBox solved many more problems and enjoyed the experience more, but the students who used Lynnette performed significantly better on the post-test. Our analysis of the design features of both systems suggests possible explanations and spurs ideas for how the strengths of the two systems might be combined. The study shows that intuitions about what works, educationally, can be fallible. Therefore, there is no substitute for rigorous empirical evaluation of educational technologies.", "title": "" }, { "docid": "2232dfe1e46032d58f4e110bc850dcbc", "text": "Quantization methods have been introduced to perform large scale approximate nearest search tasks. Residual Vector Quantization (RVQ) is one of the effective quantization methods. RVQ uses a multi-stage codebook learning scheme to lower the quantization error stage by stage. However, there are two major limitations for RVQ when applied to on high-dimensional approximate nearest neighbor search: 1. The performance gain diminishes quickly with added stages. 2. Encoding a vector with RVQ is actually NP-hard. 
In this paper, we propose an improved residual vector quantization (IRVQ) method; our IRVQ learns a codebook with a hybrid method of subspace clustering and warm-started k-means on each stage to prevent performance gain from dropping, and uses a multi-path encoding scheme to encode a vector with lower distortion. Experimental results on the benchmark datasets show that our method substantially improves RVQ and delivers better performance compared to the state-of-the-art. Introduction Nearest neighbor search is a fundamental problem in many computer vision applications such as image retrieval (Rui, Huang, and Chang 1999) and image recognition (Lowe 1999). In high dimensional data-space, nearest neighbor search becomes very expensive due to the curse of dimensionality (Indyk and Motwani 1998). Approximate nearest neighbor (ANN) search is a much more practical approach. Quantization-based algorithms have recently been developed to perform ANN search tasks. They achieved superior performances against other ANN search methods (Jegou, Douze, and Schmid 2011). Product Quantization (Jegou, Douze, and Schmid 2011) is a representative quantization algorithm. PQ splits the original d-dimensional data vector into M disjoint sub-vectors and learns M codebooks {C1, · · · , CM}, where each codebook contains K codewords, Cm = {cm(1), · · · , cm(K)}, m ∈ 1 · · · M. Then the original data vector is approximated by the Cartesian product of the codewords it has been assigned to. PQ allows fast distance computation between a quantized vector x and an input query vector q via asymmetric distance computation (ADC): the distances between q and all codewords cm(k), m ∈ 1 · · · M, k ∈ 1 · · · K are precomputed, then the approximate distance between q and x can be efficiently computed by the sum of distances between q and the codewords of x in O(M) time. Compared to the exact distance computation taking O(d) time, the time complexity is drastically reduced. Product Quantization is based on the assumption that the sub-vectors are statistically mutually independent, such that the original vector can be effectively represented by the Cartesian product of quantized sub-vectors. However, vectors in real data do not all meet that assumption. Optimized Product Quantization (OPQ) (Ge et al. 2013) and Cartesian K-means (Norouzi and Fleet 2013) are proposed to find an optimal subspace decomposition to overcome this issue. Residual Vector Quantization (RVQ) (Chen, Guan, and Wang 2010) is an alternative approach to perform approximate nearest neighbor search task. Similar to Additive Quantization (AQ) (Babenko and Lempitsky 2014) and Composite Quantization (Ting Zhang 2014), RVQ approximates the original vector as the sum of codewords instead of Cartesian product. Asymmetric distance computation can also be applied to data quantized by RVQ. RVQ adopts a multi-stage clustering scheme, on each stage the residual vectors are clustered instead of a segment of the original vector. Compared to PQ, RVQ naturally produces mutually independent codebooks. However, the gain of adding an additional stage drops quickly as residual vectors become more random, limiting the effectiveness of multi-stage methods to only a few stages (Gersho and Gray 1992). A direct observation is that the encodings of codebooks learned on the latter stages have low information entropy.
Moreover, encoding a vector with dictionaries learned by RVQ is essentially a high-order Markov random field problem, which is NP-hard. In this paper, we propose the Improved Residual Vector Quantization (IRVQ). IRVQ uses a hybrid method of subspace clustering and warm-started k-means to obtain high information entropy for each codebook, and uses a multi-path search method to obtain a better encoding. The basic idea behind IRVQ is rather simple: 1. Subspace clustering generally produces a high information entropy codebook. Though we seek a clustering on the whole feature space, such a codebook is still useful. We utilize this information by warm-starting k-means with this codebook. 2. The norms of codewords reduce stage by stage. Though the naive "greedy" encoding fails to produce the optimal encoding, a less "greedy" encoding is more likely to obtain the optimal encoding. We propose a multi-path encoding algorithm for learning codebooks. The codebooks learned by IRVQ are mutually independent and each codebook has high information entropy, and a significantly lower quantization error is observed compared to RVQ and other state-of-the-art methods. We have validated our method on two commonly used datasets for evaluating ANN search performance: SIFT-1M and GIST1M (Jegou, Douze, and Schmid 2011). The empirical results show that our IRVQ improves RVQ significantly. Our IRVQ also outperforms other state-of-the-art quantization methods such as PQ, OPQ, and AQ. Residual Vector Quantization Residual vector quantization (RVQ) (Juang and Gray Jr 1982) is a common technique to approximate original data with several low complexity quantizers, instead of a prohibitive high complexity quantizer. RVQ reduces the quantization error by learning quantizers on the residues. RVQ is introduced to perform ANN search in (Chen, Guan, and Wang 2010). The gain of adding an additional stage relies on the commonality among residual vectors from different cluster centers. Thus on high-dimensional data this approach performs badly. Information Entropy It has been observed that the residual vectors become very random with increasing stages, limiting the effectiveness of RVQ to a small number of stages. To begin with, we first examine the dataset encoded by RVQ from the point of view of information entropy. For hashing based approximate nearest neighbor search methods, e.g. Spectral Hashing (Weiss, Torralba, and Fergus 2009), we seek a code in which each bit has a 50% chance of being one or zero, and different bits are mutually independent. Similarly, we would like to obtain maximum information entropy S(Cm), defined below, for each codebook and no mutual information between different codebooks.", "title": "" }, { "docid": "d633f883c3dd61c22796a5774a56375c", "text": "Neural networks are the topic of this paper. Neural networks are very powerful as nonlinear signal processors, but the results obtained are often far from satisfactory. The purpose of this article is to evaluate the reasons for these frustrations and show how to make these neural networks successful. The following are the main challenges of neural network applications: (1) Which neural network architectures should be used? (2) How large should a neural network be? (3) Which learning algorithms are most suitable? The multilayer perceptron (MLP) architecture is unfortunately the preferred neural network topology of most researchers. It is the oldest neural network architecture, and it is compatible with all training software.
However, the MLP topology is less powerful than other topologies such as bridged multilayer perceptron (BMLP), where connections across layers are allowed. The error-back propagation (EBP) algorithm is the most popular learning algorithm, but it is very slow and seldom gives adequate results. The EBP training process requires 100-1,000 times more iterations than the more advanced algorithms such as Levenberg-Marquardt (LM) or neuron by neuron (NBN) algorithms. What is most important is that the EBP algorithm is not only slow but often it is not able to find solutions for close-to-optimum neural networks. The paper describes and compares several learning algorithms.", "title": "" }, { "docid": "00679e6e34f404e01adc6d3315d7964e", "text": "Immature embryos and embryogenic calli of rice, both japonica and indica subspecies, were bombarded with tungsten particles coated with plasmid DNA that contained a gene encoding hygromycin phosphotransferase (HPH, conferring hygromycin resistance) driven by the CaMV 35S promoter or Agrobactenum tumefaciens NOS promoter. Putatively transformed cell clusters were identified from the bombarded tissues 2 weeks after selection on hygromycin B. By separating these cell clusters from each other, and by stringent selection not only at the callus growth stage but also during regeneration and plantlet growth, the overall transformation and selection efficiencies were substantially improved over those previously reported. From the most responsive cultivar used in these studies, an average of one transgenic plant was produced from 1.3 immature embryos or from 5 pieces of embryogenic calli bombarded. Integration of the introduced gene into the plant genome, and inheritance to the offspring were demonstrated. By using this procedure, we have produced several hundred transgenic plants. The procedure described here provides a simple method for improving transformation and selection efficiencies in rice and may be applicable to other monocots.", "title": "" }, { "docid": "27f773226c458febb313fd48b59c7222", "text": "This thesis presents extensions to the local binary pattern (LBP) texture analysis operator. The operator is defined as a gray-scale invariant texture measure, derived from a general definition of texture in a local neighborhood. It is made invariant against the rotation of the image domain, and supplemented with a rotation invariant measure of local contrast. The LBP is proposed as a unifying texture model that describes the formation of a texture with micro-textons and their statistical placement rules. The basic LBP is extended to facilitate the analysis of textures with multiple scales by combining neighborhoods with different sizes. The possible instability in sparse sampling is addressed with Gaussian low-pass filtering, which seems to be somewhat helpful. Cellular automata are used as texture features, presumably for the first time ever. With a straightforward inversion algorithm, arbitrarily large binary neighborhoods are encoded with an eight-bit cellular automaton rule, resulting in a very compact multi-scale texture descriptor. The performance of the new operator is shown in an experiment involving textures with multiple spatial scales. An opponent-color version of the LBP is introduced and applied to color textures. Good results are obtained in static illumination conditions. An empirical study with different color and texture measures however shows that color and texture should be treated separately. 
A number of different applications of the LBP operator are presented, emphasizing real-time issues. A very fast software implementation of the operator is introduced, and different ways of speeding up classification are evaluated. The operator is successfully applied to industrial visual inspection applications and to image retrieval.", "title": "" }, { "docid": "b44ebb850ce2349dddc35bbf9a01fb8a", "text": "Automatically assessing emotional valence in human speech has historically been a difficult task for machine learning algorithms. The subtle changes in the voice of the speaker that are indicative of positive or negative emotional states are often “overshadowed” by voice characteristics relating to emotional intensity or emotional activation. In this work we explore a representation learning approach that automatically derives discriminative representations of emotional speech. In particular, we investigate two machine learning strategies to improve classifier performance: (1) utilization of unlabeled data using a deep convolutional generative adversarial network (DCGAN), and (2) multitask learning. Within our extensive experiments we leverage a multitask annotated emotional corpus as well as a large unlabeled meeting corpus (around 100 hours). Our speaker-independent classification experiments show that in particular the use of unlabeled data in our investigations improves performance of the classifiers and both fully supervised baseline approaches are outperformed considerably. We improve the classification of emotional valence on a discrete 5-point scale to 43.88% and on a 3-point scale to 49.80%, which is competitive to state-of-the-art performance.", "title": "" }, { "docid": "a4865029148c6803b26d40723c89ff93", "text": "Introduction One of the greatest challenges in cosmetic rhinoplasty is the overly thick nasal skin envelope. In addition to exacerbating unwanted nasal width, thick nasal skin is a major impediment to aesthetic refinement of the nose. Owing to its bulk, noncompliance, and tendency to scar, overly thick skin frequently obscures topographic definition of the nasal framework, thereby limiting or negating cosmetic improvements. Masking of the skeletal contour is usually most evident following aggressive reduction rhinoplasty where overly thick and noncompliant nasal skin fails to shrink and conform to the smaller skeletal framework. The result is excessive subcutaneous dead space leading to further fibrotic thickening of the already bulky nasal covering. Despite the decrease in nasal size, the resulting nasal contour is typically amorphous, ill-defined and devoid of beauty and elegance. To optimize cosmetic results in thick-skinned noses, contour enhancement is best achieved by elongating and projecting the skeletal framework whenever possible (Figure 1). Skeletal augmentation not only reduces dead space to minimize fibrotic thickening, it also stretches and thins the outer soft-tissue covering for improved surface definition. However, in noses in which the nasal framework is already too large, skeletal augmentation is not a viable option, and the overly thick skin envelope must be surgically thinned to achieve better skin contractility and improved cosmetic outcomes. 
Histologic examination of overly thick nasal tip skin reveals comparatively little dermal thickening or increased adipose content but rather a substantial increase in thickness of the subcutaneous fibromuscular tissues.1 Dubbed the “nasal SMAS” layer,2 the fibromuscular tissue layer lies just beneath the subdermal fat and may account for an additional 2 to 3 mm of skin flap thickness. Owing to a discrete dissection plane separating the nasal SMAS layer from the overlying subdermal fat, surgical excision of the hypertrophic nasal SMAS layer can be performed safely in healthy candidates using the external rhinoplasty approach.3 However, the overlying subdermal plexus (contained within the subdermal fat) must be carefully protected.3-5 Similarly, inadvertent disruption of the paired lateral nasal arteries—major feeding vessels to the subdermal plexus—must also be avoided, and special care should be exercised when working near the alar crease.3-5 SMAS debulking is also contraindicated in skin less than 3-mm thick because overly aggressive surgical debulking may lead to unsightly prominence of the skeletal topography. However, in the appropriate patient, SMAS debulking can reduce skin envelope thickness by as much as 3.0 mm, with greater reductions common in revision rhinoplasty cases when vascularity permits.6", "title": "" }, { "docid": "b52fb324287ec47860e189062f961ad8", "text": "In this paper we reexamine the place and role of stable model semantics in logic programming and contrast it with a least Herbrand model approach to Horn programs. We demonstrate that inherent features of stable model semantics naturally lead to a logic programming system that offers an interesting alternative to more traditional logic programming styles of Horn logic programming, stratified logic programming and logic programming with well-founded semantics. The proposed approach is based on the interpretation of program clauses as constraints. In this setting programs do not describe a single intended model, but a family of stable models. These stable models encode solutions to the constraint satisfaction problem described by the program. Our approach imposes restrictions on the syntax of logic programs. In particular, function symbols are eliminated from the language. We argue that the resulting logic programming system is well-attuned to problems in the class NP, has a well-defined domain of applications, and an emerging methodology of programming. We point out that what makes the whole approach viable is recent progress in implementations of algorithms to compute stable models of propositional logic programs.", "title": "" }, { "docid": "ee939e08bf4547a863a42bc87c9fdbbd", "text": "Schwartz-Jampel syndrome (SJS) is a heterogeneous autosomal recessive syndrome of myotonia and bone dysplasia. Two types have been recognised: the classical type with late infantile or childhood manifestation and a rarer form with neonatal manifestation. We report five families with a total of 11 children affected with severe neonatal SJS. All presented after birth with skeletal abnormalities and feeding difficulties. Five had the typical pursed appearance of the mouth. Nine died from respiratory complications (five in the neonatal period and four before 2 years of age). One (4 months old) remains hospitalised since birth requiring continuous oxygen supplementation and one (5 months old) requires nasogastric tube feeding and has repeated attacks of aspiration. 
Only seven of the 17 previously reported neonatal SJS cases had a similar course to the patients in this report. We suggest that within neonatal SJS there is a subgroup which manifests severe respiratory and feeding problems and has a poor prognosis. This report brings the total number of children with neonatal SJS reported from the UAE to 14. This represents the largest review of this syndrome to date from one centre and indicates that this syndrome is fairly common in the population of the UAE.", "title": "" }, { "docid": "452a0765f74fd4301938fb8461cce563", "text": "Falls are the primary cause of accidents among the elderly and frequently cause fatal and non-fatal injuries associated with a large amount of medical costs. Fall detection using wearable wireless sensor nodes has the potential of improving elderly telecare. This investigation proposes a ZigBee-based location-aware fall detection system for elderly telecare that provides an unobstructed communication between the elderly and caregivers when falls happen. The system is based on ZigBee-based sensor networks, and the sensor node consists of a motherboard with a tri-axial accelerometer and a ZigBee module. A wireless sensor node worn on the waist continuously detects fall events and starts an indoor positioning engine as soon as a fall happens. In the fall detection scheme, this study proposes a three-phase threshold-based fall detection algorithm to detect critical and normal falls. The fall alarm can be canceled by pressing and holding the emergency fall button only when a normal fall is detected. On the other hand, there are three phases in the indoor positioning engine: path loss survey phase, Received Signal Strength Indicator (RSSI) collection phase and location calculation phase. Finally, the location of the faller will be calculated by a k-nearest neighbor algorithm with weighted RSSI. The experimental results demonstrate that the fall detection algorithm achieves 95.63% sensitivity, 73.5% specificity, 88.62% accuracy and 88.6% precision. Furthermore, the average error distance for indoor positioning is 1.15 ± 0.54 m. The proposed system successfully delivers critical information to remote telecare providers who can then immediately help a fallen person.", "title": "" }, { "docid": "75e5308959bfed2cf54af052b66798b2", "text": "This article describes a design and implementation of an augmented desk system, named EnhancedDesk, which smoothly integrates paper and digital information on a desk. The system provides users an intelligent environment that automatically retrieves and displays digital information corresponding to the real objects (e.g., books) on the desk by using computer vision. The system also provides users direct manipulation of digital information by using the users' own hands and fingers for more natural and more intuitive interaction. Based on the experiments with our first prototype system, some critical issues on augmented desk systems were identified when trying to pursue rapid and fine recognition of hands and fingers. To overcome these issues, we developed a novel method for realtime finger tracking on an augmented desk system by introducing a infrared camera, pattern matching with normalized correlation, and a pan-tilt camera. We then show an interface prototype on EnhancedDesk. It is an application to a computer-supported learning environment, named Interactive Textbook. 
The system shows how effective the integration of paper and digital information is and how natural and intuitive direct manipulation of digital information with users' hands and fingers is.", "title": "" }, { "docid": "8beca44b655835e7a33abd8f1f343a6f", "text": "Taxonomies have been developed as a mechanism for cyber attack categorisation. However, when one considers the recent and rapid evolution of attacker techniques and targets, the applicability and effectiveness of these taxonomies should be questioned. This paper applies two approaches to the evaluation of seven taxonomies. The first employs a criteria set, derived through analysis of existing works in which critical components to the creation of taxonomies are defined. The second applies historical attack data to each taxonomy under review, more specifically, attacks in which industrial control systems have been targeted. This combined approach allows for a more in-depth understanding of existing taxonomies to be developed, from both a theoretical and practical perspective.", "title": "" }, { "docid": "8b39fe1fdfdc0426cc1c31ef2c825c58", "text": "Approximate nonnegative matrix factorization is an emerging technique with a wide spectrum of potential applications in data analysis. Currently, the most-used algorithms for this problem are those proposed by Lee and Seung [7]. In this paper we present a variation of one of the Lee-Seung algorithms with a notably improved performance. We also show that algorithms of this type do not necessarily converge to local minima.", "title": "" }, { "docid": "c7c497dd26e5cbdb7e037e4ac83712eb", "text": "A decrease in the abundance and biodiversity of intestinal bacteria within the dominant phylum Firmicutes has been observed repeatedly in Crohn disease (CD) patients. In this study, we determined the composition of the mucosa-associated microbiota of CD patients at the time of surgical resection and 6 months later using FISH analysis. We found that a reduction of a major member of Firmicutes, Faecalibacterium prausnitzii, is associated with a higher risk of postoperative recurrence of ileal CD. A lower proportion of F. prausnitzii on resected ileal Crohn mucosa also was associated with endoscopic recurrence at 6 months. To evaluate the immunomodulatory properties of F. prausnitzii we analyzed the anti-inflammatory effects of F. prausnitzii in both in vitro (cellular models) and in vivo [2,4,6-trinitrobenzenesulphonic acid (TNBS)-induced] colitis in mice. In Caco-2 cells transfected with a reporter gene for NF-kappaB activity, F. prausnitzii had no effect on IL-1beta-induced NF-kappaB activity, whereas the supernatant abolished it. In vitro peripheral blood mononuclear cell stimulation by F. prausnitzii led to significantly lower IL-12 and IFN-gamma production levels and higher secretion of IL-10. Oral administration of either live F. prausnitzii or its supernatant markedly reduced the severity of TNBS colitis and tended to correct the dysbiosis associated with TNBS colitis, as demonstrated by real-time quantitative PCR (qPCR) analysis. F. prausnitzii exhibits anti-inflammatory effects on cellular and TNBS colitis models, partly due to secreted metabolites able to block NF-kappaB activation and IL-8 production. These results suggest that counterbalancing dysbiosis using F. prausnitzii as a probiotic is a promising strategy in CD treatment.", "title": "" } ]
scidocsrr
5ce610deca4dce900d828dbdfb9884e2
Integrating Static and Dynamic Analysis for Detecting Vulnerabilities
[ { "docid": "1f26e35e72c820f9b86b3bb486057015", "text": "During execution, when two or more names exist for the same location at some program point, we call them aliases. In a language which allows arbitrary pointers, the problem of determining aliases at a program point is P-space-hard [Lan92]. We present an algorithm for the Conditional May Alias problem, which can be used to safely approximate Interprocedural May Alias in the presence of pointers. This algorithm is as precise as possible in the worst case and has been implemented in a prototype analysis tool for C programs. Preliminary speed and precision results are presented.", "title": "" } ]
[ { "docid": "a0437070b667281f6cbb657815d7f5c8", "text": "This paper presents a novel approach to visual saliency that relies on a contextually adapted representation produced through adaptive whitening of color and scale features. Unlike previous models, the proposal is grounded on the specific adaptation of the basis of low level features to the statistical structure of the image. Adaptation is achieved through decorrelation and contrast normalization in several steps in a hierarchical approach, in compliance with coarse features described in biological visual systems. Saliency is simply computed as the square of the vector norm in the resulting representation. The performance of the model is compared with several state-of-the-art approaches, in predicting human fixations using three different eye-tracking datasets. Referring this measure to the performance of human priority maps, the model proves to be the only one able to keep the same behavior through different datasets, showing free of biases. Moreover, it is able to predict a wide set of relevant psychophysical observations, to our knowledge, not reproduced together by any other model before. Research on the estimation of visual saliency has experienced an increasing activity in the last years from both computer vision and neuro-science perspectives, giving rise to a number of improved approaches. Furthermore, a wide diversity of applications based on saliency are being proposed that range from image retargeting [1] to human-like robot surveillance [2], object learning and recognition [3–5], objectness definition [6], image processing for retinal implants [7], and many others. Existing approaches to visual saliency have adopted a number of quite different strategies. A first group, including many early models, is very influenced by psychophysical theories supporting a parallel processing of several feature dimensions. Models in this group are particularly concerned with biological plausibility in their formulation, and they resort to the modeling of visual functions. Outstanding examples can be found in [8] or in [9]. Most recent models are in a second group that broadly aims to estimate the inverse of the probability density of a set of low level features by different procedures. In this kind of models, low level features are usually …", "title": "" }, { "docid": "adf0a2cad66a7e48c16f02ef1bc4e9da", "text": "Recently, several techniques have been explored to detect unusual behaviour in surveillance videos. Nevertheless, few studies leverage features from pre-trained CNNs and none of them present a comparison of features generated by different models. Motivated by this gap, we compare features extracted by four state-of-the-art image classification networks as a way of describing patches from security video frames. We carry out experiments on the Ped1 and Ped2 datasets and analyze the usage of different feature normalization techniques. Our results indicate that choosing the appropriate normalization is crucial to improve the anomaly detection performance when working with CNN features. Also, in the Ped2 dataset our approach was able to obtain results comparable to the ones of several state-of-the-art methods. 
Lastly, as our method only considers the appearance of each frame, we believe that it can be combined with approaches that focus on motion patterns to further improve performance.", "title": "" }, { "docid": "b57859a76aea1fb5d4219068bde83283", "text": "Software vulnerabilities are the root cause of a wide range of attacks. Existing vulnerability scanning tools are able to produce a set of suspects. However, they often suffer from a high false positive rate. Convicting a suspect and vindicating false positives are mostly a highly demanding manual process, requiring a certain level of understanding of the software. This limitation significantly thwarts the application of these tools by system administrators or regular users who are concerned about security but lack of understanding of, or even access to, the source code. It is often the case that even developers are reluctant to inspect/fix these numerous suspects unless they are convicted by evidence. In this paper, we propose a lightweight dynamic approach which generates evidence for various security vulnerabilities in software, with the goal of relieving the manual procedure. It is based on data lineage tracing, a technique that associates each execution point precisely with a set of relevant input values. These input values can be mutated by an offline analysis to generate exploits. We overcome the efficiency challenge by using Binary Decision Diagrams (BDD). Our tool successfully generates exploits for all the known vulnerabilities we studied. We also use it to uncover a number of new vulnerabilities, proved by evidence.", "title": "" }, { "docid": "ec4bde3a67cccca41ca3e7af00072f1c", "text": "Single-nucleus RNA sequencing (sNuc-seq) profiles RNA from tissues that are preserved or cannot be dissociated, but it does not provide high throughput. Here, we develop DroNc-seq: massively parallel sNuc-seq with droplet technology. We profile 39,111 nuclei from mouse and human archived brain samples to demonstrate sensitive, efficient, and unbiased classification of cell types, paving the way for systematic charting of cell atlases.", "title": "" }, { "docid": "2531d8d05d262c544a25dbffb7b43d67", "text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.", "title": "" }, { "docid": "4480840e6dbab77e4f032268ea69bff1", "text": "This chapter provides a critical survey of emergence definitions both from a conceptual and formal standpoint. The notions of downward / backward causation and weak / strong emergence are specially discussed, for application to complex social system with cognitive agents. Particular attention is devoted to the formal definitions introduced by (Müller 2004) and (Bonabeau & Dessalles, 1997), which are operative in multi-agent frameworks and make sense from both cognitive and social point of view. 
A diagrammatic 4-Quadrant approach allows us to understand complex phenomena along both the interior/exterior and individual/collective dimensions.", "title": "" }, { "docid": "46ab119ffd9850fe1e5ff35b6cda267d", "text": "Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.", "title": "" }, { "docid": "1592dc2c81d9d6b9c58cc1a5b530c923", "text": "We propose a cloudlet network architecture to bring the computing resources from the centralized cloud to the edge. Thus, each User Equipment (UE) can communicate with its Avatar, a software clone located in a cloudlet, and can thus lower the end-to-end (E2E) delay. However, UEs are moving over time, and so the low E2E delay may not be maintained if UEs' Avatars stay in their original cloudlets. Thus, live Avatar migration (i.e., migrating a UE's Avatar to a suitable cloudlet based on the UE's location) is enabled to maintain the low E2E delay between each UE and its Avatar. On the other hand, the migration itself incurs extra overheads in terms of resources of the Avatar, which compromise the performance of applications running in the Avatar. By considering the gain (i.e., the E2E delay reduction) and the cost (i.e., the migration overheads) of the live Avatar migration, we propose a PRofIt Maximization Avatar pLacement (PRIMAL) strategy for the cloudlet network in order to optimize the tradeoff between the migration gain and the migration cost by selectively migrating the Avatars to their optimal locations. Simulation results demonstrate that as compared to the other two strategies (i.e., Follow Me Avatar and Static), PRIMAL maximizes the profit in terms of maintaining the low average E2E delay between UEs and their Avatars and minimizing the migration cost simultaneously.", "title": "" }, { "docid": "5a4267a4ba74480f64d9d01712c54d2e", "text": "We provide a review of the alignment literature in IT, addressing questions such as: What have we learned? What is disputed? Who are contributors to the debate? The article is intended to be useful to faculty and graduate students considering conducting research on alignment, instructors preparing lectures, and practitioners seeking to assess the ‘state-of-play’. It is both informational and provocative. Challenges to the value of alignment research, divergent views, and new perspectives on alignment are presented. It is hoped that the article will spark helpful conversation on the merits of continued investigation of IT alignment. Journal of Information Technology (2007) 22, 297–315. doi:10.1057/palgrave.jit.2000109 Published online 18 September 2007", "title": "" }, { "docid": "0e30b5ffa34b9a065130688f0b7e44da", "text": "This brief presents a new technique for minimizing reference spurs in a charge-pump phase-locked loop (PLL) while maintaining dead-zone-free operation. 
The proposed circuitry uses a phase/frequency detector with a variable delay element in its reset path, with the delay length controlled by feedback from the charge-pump. Simulations have been performed with several PLLs to compare the proposed circuitry with previously reported techniques. The proposed approach shows improvements over previously reported techniques of 12 and 16 dB in the two closest reference spurs", "title": "" }, { "docid": "b4e676d4d11039c5c5feb5e549eb364f", "text": "Abstract Qualitative case study methodology provides tools for researchers to study complex phenomena within their contexts. When the approach is applied correctly, it becomes a valuable method for health science research to develop theory, evaluate programs, and develop interventions. The purpose of this paper is to guide the novice researcher in identifying the key elements for designing and implementing qualitative case study research projects. An overview of the types of case study designs is provided along with general recommendations for writing the research questions, developing propositions, determining the “case” under study, binding the case and a discussion of data sources and triangulation. To facilitate application of these principles, clear examples of research questions, study propositions and the different types of case study designs are provided. Keywords: Case Study and Qualitative Method. Publication Date: 12-1-2008.", "title": "" }, { "docid": "e315a7e8e83c4130f9a53dec21598ae6", "text": "Modern techniques for data analysis and machine learning are so-called kernel methods. The most famous and successful one is represented by the support vector machine (SVM) for classification or regression tasks. Further examples are kernel principal component analysis for feature extraction or other linear classifiers like the kernel perceptron. The fundamental ingredient in these methods is the choice of a kernel function, which computes a similarity measure between two input objects. For good generalization abilities of a learning algorithm it is indispensable to incorporate problem-specific a-priori knowledge into the learning process. The kernel function is an important element for this. This thesis focusses on a certain kind of a-priori knowledge, namely transformation knowledge. This comprises explicit knowledge of pattern variations that do not or only slightly change the pattern’s inherent meaning e.g. rigid movements of 2D/3D objects or transformations like slight stretching, shifting, rotation of characters in optical character recognition etc. Several methods for incorporating such knowledge in kernel functions are presented and investigated. 1. Invariant distance substitution kernels (IDS-kernels): In many practical questions the transformations are implicitly captured by sophisticated distance measures between objects. Examples are nonlinear deformation models between images. Here an explicit parameterization would require an arbitrary number of parameters. Such distances can be incorporated in distance- and inner-product-based kernels. 2. Tangent distance kernels (TD-kernels): Specific instances of IDS-kernels are investigated in more detail as these can be efficiently computed. We assume differentiable transformations of the patterns. 
Given such knowledge, one can construct linear approximations of the transformation manifolds and use these efficiently for kernel construction by suitable distance functions. 3. Transformation integration kernels (TI-kernels): The technique of integration over transformation groups for feature extraction can be extended to kernel functions and more general group, non-group, discrete or continuous transformations in a suitable way. Theoretically, these approaches differ in the way the transformations are represented and in the adjustability of the transformation extent. More fundamentally, kernels from category 3 turn out to be positive definite, kernels of types 1 and 2 are not positive definite, which is generally required for being usable in kernel methods. This is the", "title": "" }, { "docid": "fae6fbf1c80a255cb597da00d1dcb98d", "text": "Initially known as venous angioma, Developmental Venous Anomaly (DVA) is a persistent variant of fetal brain venous drainage [1, 2]. Unlike cerebellar DVA with or without brainstem extension, isolated pontine DVA is a rare condition [3, 4]; and as per our knowledge, its presentation with exclusive transpontine drainage has never been described before. The authors present the case of an isolated pontine DVA with a selective transpontine drainage discovered incidentally; the clinical presentation, the radiological findings, and the management are discussed.", "title": "" }, { "docid": "9f1a1fdd9e6bc888abb14827d43d1980", "text": "In recent years, many variance reduced algorithms for empirical risk minimization have been introduced. In contrast to vanilla SGD, these methods converge linearly on strong convex problems. To obtain the variance reduction, current methods either require frequent passes over the full data to recompute gradients—without making any progress during this time (like in SVRG), or they require memory of the same size as the input problem (like SAGA). In this work, we propose k-SVRG, an algorithm that interpolates between those two extremes: it makes best use of the available memory and in turn does avoid full passes over the data without making progress. We prove linear convergence of k-SVRG on strongly convex problems and convergence to stationary points on non-convex problems. Numerical experiments show the effectiveness of our method.", "title": "" }, { "docid": "b99bda313e172cb7ef169e771bfcff89", "text": "This paper presents an optimal design of a parallel manipulator aiming to perform pick-and-place operations at high speed and high acceleration. After reviewing existing architectures of high-speed and high-acceleration parallel manipulators, a new design of a 4-DOF parallel manipulator is presented, with an articulated traveling plate, which is free of internal singularities and is able to achieve high performances. The kinematic and simplified, but realistic, dynamic models are derived and validated on a manipulator prototype. Experimental tests show that this design is able to perform beyond the high targets, i.e., it reaches a speed of 5.5 m/s and an acceleration of 165 m/s2. The experimental prototype was further optimized on the basis of kinematic and dynamic criteria. Once the motors, gear ratio, and several link lengths are determined, a modified design of the articulated traveling plate is proposed in order to reach a better dynamic equilibrium among the four legs of the manipulator. 
The obtained design is the basis of a commercial product offering the shortest cycle times among all robots available in today's market.", "title": "" }, { "docid": "d4cd0dabcf4caa22ad92fab40844c786", "text": "NA", "title": "" }, { "docid": "5f990b79e589b5795ad5d76bdfc5d15b", "text": "Using the ECG analog front-end and ARM Cortex-M3 processor to develop a portable ECG monitor. The STM32 as the core unit, the ADS1292 as the acquisition analog front-end, it also includes a touch screen display module, an SD card storage module and a voltage conversion module. Automatic ECG analysis algorithms including QRS complex detection, QRS width detection and ST segment detection. ECG can be divided into four kinds of heart beat and eight kinds of arrhythmia rhythm using the extracted ECG parameters. The results have been evaluated on the MIT-BIH Arrhythmia Database, the sensitivity of QRS complex detection was 99% and the sensitivity of heart beat classification was above 95%. The monitor can display the real-time ECG waveform and the current heart rate, to make recommendations for the subjects, and it stored the abnormal ECG waveform that provided to physicians for further analysis and diagnosis. Ultimately the monitor gives a composite score based on heart rate, arrhythmia and ST segment to facilitate subjects for heart health.", "title": "" }, { "docid": "08b22a3e6ad847bedaf057e10fdd5f3c", "text": "In this paper we propose a technique to adapt convolutional neural network (CNN) based object detectors trained on RGB images to effectively leverage depth images at test time to boost detection performance. Given labeled depth images for a handful of categories we adapt an RGB object detector for a new category such that it can now use depth images in addition to RGB images at test time to produce more accurate detections. Our approach is built upon the observation that lower layers of a CNN are largely task and category agnostic and domain specific while higher layers are largely task and category specific while being domain agnostic. We operationalize this observation by proposing a mid-level fusion of RGB and depth CNNs. Experimental evaluation on the challenging NYUD2 dataset shows that our proposed adaptation technique results in an average 21% relative improvement in detection performance over an RGB-only baseline even when no depth training data is available for the particular category evaluated. We believe our proposed technique will extend advances made in computer vision to RGB-D data leading to improvements in performance at little additional annotation effort.", "title": "" }, { "docid": "218e80c55d0d184b5c699b3df7d3377d", "text": "In the state-of-the-art video-based smoke detection methods, the representation of smoke mainly depends on the visual information in the current image frame. In the case of light smoke, the original background can be still seen and may deteriorate the characterization of smoke. The core idea of this paper is to demonstrate the superiority of using smoke component for smoke detection. In order to obtain smoke component, a blended image model is constructed, which basically is a linear combination of background and smoke components. Smoke opacity which represents a weighting of the smoke component is also defined. Based on this model, an optimization problem is posed. An algorithm is devised to solve for smoke opacity and smoke component, given an input image and the background. 
The resulting smoke opacity and smoke component are then used to perform the smoke detection task. The experimental results on both synthesized and real image data verify the effectiveness of the proposed method.", "title": "" }, { "docid": "87d885bf255c43bff0efdee8f89f0e2b", "text": "Enabling individuals who are living with reduced mobility of the hand to utilize portable exoskeletons at home has the potential to deliver rehabilitation therapies with a greater intensity and relevance to activities of daily living. Various hand exoskeleton designs have been explored in the past, however, devices have remained nonportable and cumbersome for the intended users. Here we investigate a remote actuation system for wearable hand exoskeletons, which moves weight from the weakened limb to the shoulders, reducing the burden on the user and improving portability. A push-pull Bowden cable was used to transmit actuator forces from a backpack to the hand with strict attention paid to total system weight, size, and the needs of the target population. We present the design and integration of this system into a previously presented hand exoskeleton, as well as its characterization. Integration of remote actuation reduced the exoskeleton weight by 56% to 113g without adverse effects to functionality. Total actuation system weight was kept to 754g. The loss of positional accuracy inherent with Bowden cable transmissions was compensated for through closed loop positional control of the transmission output. The achieved weight reduction makes hand exoskeletons more suitable to the intended user, which will permit the study of their effectiveness in providing long duration, high intensity, and targeted rehabilitation as well as functional assistance.", "title": "" } ]
scidocsrr
0a50e5865f33d51565b0d9a510adad51
Polyglot Semantic Parsing in APIs
[ { "docid": "1acc97afa9facf77289ddf1015b1e110", "text": "This short note presents a new formal language, lambda dependency-based compositional semantics (lambda DCS) for representing logical forms in semantic parsing. By eliminating variables and making existential quantification implicit, lambda DCS logical forms are generally more compact than those in lambda calculus.", "title": "" } ]
[ { "docid": "d3e409b074c4c26eb208b27b7b58a928", "text": "The increase in concern for carbon emission and reduction in natural resources for conventional power generation, the renewable energy based generation such as Wind, Photovoltaic (PV), and Fuel cell has gained importance. Out of which the PV based generation has gained significance due to availability of abundant sunlight. As the Solar power conversion is a low efficient conversion process, accurate and reliable, modeling of solar cell is important. Due to the non-linear nature of diode based PV model, the accurate design of PV cell is a difficult task. A built-in model of PV cell is available in Simscape, Simelectronics library, Matlab. The equivalent circuit parameters have to be computed from data sheet and incorporated into the model. However it acts as a stiff source when implemented with a MPPT controller. Henceforth, to overcome this drawback, in this paper a two-diode model of PV cell is implemented in Matlab Simulink with reduced four required parameters along with similar configuration of the built-in model. This model allows incorporation of MPPT controller. The I-V and P-V characteristics of these two models are investigated under different insolation levels. A PV based generation system feeding a DC load is designed and investigated using these two models and further implemented with MPPT based on P&O technique.", "title": "" }, { "docid": "1f761f84ab93e94054890c936426642f", "text": "Information systems in general, and business processes in particular, generate a wealth of information in the form of event traces or logs. The analysis of these logs, either offline or in real-time, can be put to numerous uses: computation of various statistics, detection of anomalous patterns or compliance violations of some form of contract. However, current solutions for Complex Event Processing (CEP) generally offer only a restricted set of predefined queries on traces, and otherwise require a user to write procedural code to compute custom queries. In this paper, we present a formal and declarative language for the manipulation of event traces.", "title": "" }, { "docid": "0b71777f8b4d03fb147ff41d1224136e", "text": "Mobile broadband demand keeps growing at an overwhelming pace. Though emerging wireless technologies will provide more bandwidth, the increase in demand may easily consume the extra bandwidth. To alleviate this problem, we propose using the content available on individual devices as caches. Particularly, when a user reaches areas with dense clusters of mobile devices, \"data spots\", the operator can instruct the user to connect with other users sharing similar interests and serve the requests locally. This paper presents feasibility study as well as prototype implementation of this idea.", "title": "" }, { "docid": "07ce1301392e18c1426fd90507dc763f", "text": "The fluorescent lamp lifetime is very dependent of the start-up lamp conditions. The lamp filament current and temperature during warm-up and at steady-state operation are important to extend the life of a hot-cathode fluorescent lamp, and the preheating circuit is responsible for attending to the start-up lamp requirements. The usual solution for the preheating circuit used in self-oscillating electronic ballasts is simple and presents a low cost. However, the performance to extend the lamp lifetime is not the most effective. 
This paper presents an effective preheating circuit for self-oscillating electronic ballasts as an alternative to the usual solution.", "title": "" }, { "docid": "9fc0624b95dbd4db2a510605be2f6508", "text": "Photoplethysmograph (PPG) is a simple and cost effective method to assess cardiovascular related parameters such as heart rate, arterial blood oxygen saturation and blood pressure. PPG signal consists of not only synchronized heart beats, but also the rhythms of respiration. The PPG sensor, which consists of infrared light-emitting diodes (LEDs) and a photodetector, allows a simple, reliable and low-cost means of monitoring the pulse rate. In this project, PPG signals are acquired through a customized data acquisition process using an Arduino board to control the pulse circuit and to obtain the PPG signals from human subjects. Using signal processing techniques, including filters, peak detections, wavelet transform analysis and power spectral density, the heart rate (HR) and breathing rate (BR) can be obtained simultaneously. Estimations of HR and BR are conducted using a MATLAB algorithm developed based on wavelet decomposition techniques to extract the heart and respiration activities from the PPG signals. The values of HR and BR obtained using the algorithm are similar to the values obtained by manual estimation for seven sample subjects, where the percentage errors are small, about 0–9.5% for the breathing rate and 2.1–5.7% for the heart rate.", "title": "" }, { "docid": "2af262d6dda0e4de4abbc593a828326a", "text": "We investigate strategies for selection of databases and instances for training cross-corpus emotion recognition systems, that is, systems that generalize across different labelling concepts, languages and interaction scenarios. We propose objective measures for prototypicality based on distances in a large space of brute-forced acoustic features and show their relation to the expected performance in cross-corpus testing. We perform extensive evaluation on eight commonly used corpora of emotional speech reaching from acted to fully natural emotion and limited phonetic content to conversational speech. In the result, selecting prototypical training instances by the proposed criterion can deliver a gain of up to 7.5 % unweighted accuracy in cross-corpus arousal recognition, and there is a correlation of .571 between the proposed prototypicality measure of databases and the expected unweighted accuracy in cross-corpus testing by Support Vector Machines.", "title": "" }, { "docid": "9aab4a607de019226e9465981b82f9b8", "text": "Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. 
However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that peoples' abilities to perceive color differences varies significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.", "title": "" }, { "docid": "ea8c0a7516b180a6a542a852b62e6497", "text": "Genetic growth curves of boars in a test station were predicted on daily weight records collected by automated weighing scales. The data contained 121 865 observations from 1477 Norwegian Landrace boars and 108 589 observations from 1300 Norwegian Duroc boars. Random regression models using Legendre polynomials up to second order for weight at different ages were compared for best predicting ability and Bayesian information criterion (BIC) for both breeds. The model with second-order polynomials had best predictive ability and BIC. The heritability for weight, based on this model, was found to vary along the growth trajectory between 0.32-0.35 for Duroc and 0.17-0.25 for Landrace. By varying test length possibility to use shorter test time and pre-selection was tested. Test length was varied and compared with average termination at 100 kg, termination of the test at 90 kg gives, e.g. 2% reduction in accuracy of estimated breeding values (EBV) for both breeds and termination at 80 kg gives 5% reduction in accuracy of EBVs for Landrace and 3% for Duroc. A shorter test period can decrease test costs per boar, but also gives possibilities to increase selection intensity as there will be room for testing more boars.", "title": "" }, { "docid": "a4d4a06d3e84183eddf7de6c0fd2721b", "text": "Reinforcement learning (RL) is a powerful paradigm for sequential decision-making under uncertainties, and most RL algorithms aim to maximize some numerical value which represents only one long-term objective. However, multiple long-term objectives are exhibited in many real-world decision and control systems, so recently there has been growing interest in solving multiobjective reinforcement learning (MORL) problems where there are multiple conflicting objectives. The aim of this paper is to present a comprehensive overview of MORL. The basic architecture, research topics, and naïve solutions of MORL are introduced at first. Then, several representative MORL approaches and some important directions of recent research are comprehensively reviewed. The relationships between MORL and other related research are also discussed, which include multiobjective optimization, hierarchical RL, and multiagent RL. Moreover, research challenges and open problems of MORL techniques are suggested.", "title": "" }, { "docid": "8edc51b371d7551f9f7e69149cd4ece0", "text": "Though many previous studies has proved the importance of trust from various perspectives, the researches about online consumer’s trust are fragmented in nature and still it need more attention from academics. 
Lack of consumers trust in online systems is a critical impediment to the success of e-Commerce. Therefore it is important to explore the critical factors that affect the formation of user’s trust in online environments. The main objective of this paper is to analyze the effects of various antecedents of online trust and to predict the user’s intention to engage in online transaction based on their trust in the Information systems. This study is conducted among Asian online consumers and later the results were compared with those from Non-Asian regions. Another objective of this paper is to integrate De Lone and McLean model of IS Success and Technology Acceptance Model (TAM) for measuring the significance of online trust in e-Commerce adoption. The results of this study show that perceived security, perceived privacy, vendor familiarity, system quality and service quality are the significant antecedents of online trust in a B2C e-Commerce context.", "title": "" }, { "docid": "5ca75490c015685a1fc670b2ee5103ff", "text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. 
A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.", "title": "" }, { "docid": "0dd1b31d778d30644ce405032729ad7a", "text": "In order to save the cost and energy for PV system testing, a high efficiency solar array simulator (SAS) implemented by an LLC resonant DC/DC converter is proposed. This converter has zero voltage switching (ZVS) operation of the primary switches and zero current switching (ZCS) operation of the rectifier diodes. By frequency modulation control, the output impedance of an LLC converter can be regulated from zero to infinite without shunt or series resistor; hence, the efficiency of the proposed SAS can be significantly increased. According to the provided operation principles and design considerations of an LLC converter, a prototype is implemented to demonstrate the feasibility of the proposed SAS.", "title": "" }, { "docid": "ce21a811ea260699c18421d99221a9f2", "text": "Medical image processing is the most challenging and emerging field now a day’s processing of MRI images is one of the parts of this field. The quantitative analysis of MRI brain tumor allows obtaining useful key indicators of disease progression. This is a computer aided diagnosis systems for detecting malignant texture in biological study. This paper presents an approach in computer-aided diagnosis for early prediction of brain cancer using Texture features and neuro classification logic. This paper describes the proposed strategy for detection; extraction and classification of brain tumour from MRI scan images of brain; which incorporates segmentation and morphological functions which are the basic functions of image processing. Here we detect the tumour, segment the tumour and we calculate the area of the tumour. Severity of the disease can be known, through classes of brain tumour which is done through neuro fuzzy classifier and creating a user friendly environment using GUI in MATLAB. In this paper cases of 10 patients is taken and severity of disease is shown and different features of images are calculated.", "title": "" }, { "docid": "c9e2d6922436a70e4ab0f7d4f3133f55", "text": "The inverse kinematics problem of robot manipulators is solved analytically in order to have complete and simple solutions to the problem. This approach is also called as a closed form solution of robot inverse kinematics problem. In this paper, the inverse kinematics of sixteen industrial robot manipulators classified by Huang and Milenkovic were solved in closed form. Each robot manipulator has an Euler wrist whose three axes intersect at a common point. Basically, five trigonometric equations were used to solve the inverse kinematics problems. Robot manipulators can be mainly divided into four different group based on the joint structure. 
In this work, the inverse kinematics solutions of SN (cylindrical robot with dome), CS (cylindrical robot), NR (articulated robot) and CC (selectively compliant assembly robot arm-SCARA, Type 2) robot manipulator belonging to each group mentioned above are given as an example. The number of the inverse kinematics solutions for the other robot manipulator was also summarized in a table.", "title": "" }, { "docid": "af836023436eaa65ef55f9928312e73f", "text": "We present a probabilistic approach to learning a Gaussian Process classifier in the presence of unlabeled data. Our approach involves a “null category noise model” (NCNM) inspired by ordered categorical noise models. The noise model reflects an assumption that the data density is lower between the class-conditional densities. We illustrate our approach on a toy problem and present comparative results for the semi-supervised classification of handwritten digits.", "title": "" }, { "docid": "2d5b476642b65c881558821fe6dc9e03", "text": "In this paper we propose a real solution for gathering information throughout the entire pig meat supply chain. The architecture consists of a a complex identification system based on RFID tags that transmits data to a distributed database during all phases of the production process. The specific work environment required identifying a suitable technology for implementation in the supply chain and the best possible organization. The aim of this work is to keep track of all the information generated during meat processing, not only for traceability purposes but chiefly for enhancing and optimizing production. All information generated by the traceability system will be collected in a central database accessible by end users thtough a public dedicated web interface.", "title": "" }, { "docid": "c41678d57f0f44b7c834e56585456ded", "text": "Movie and TV subtitles contain large amounts of conversational material, but lack an explicit turn structure. This paper present a data-driven approach to the segmentation of subtitles into dialogue turns. Training data is first extracted by aligning subtitles with transcripts in order to obtain speaker labels. This data is then used to build a classifier whose task is to determine whether two consecutive sentences are part of the same dialogue turn. The approach relies on linguistic, visual and timing features extracted from the subtitles themselves and does not require access to the audiovisual material - although speaker diarization can be exploited when audio data is available. The approach also exploits alignments with related subtitles in other languages to further improve the classification performance. The classifier achieves an accuracy of 78 % on a held-out test set. A follow-up annotation experiment demonstrates that this task is also difficult for human annotators.", "title": "" }, { "docid": "8d79dbc2108a8657915338d4781d18e5", "text": "NMDA receptors (NMDARs) are glutamate-gated ion channels that are present at most excitatory mammalian synapses. The four GluN2 subunits (GluN2A–D) contribute to four diheteromeric NMDAR subtypes that have divergent physiological and pathological roles. Channel properties that are fundamental to NMDAR function vary among subtypes. We investigated the amino acid residues responsible for variations in channel properties by creating and examining NMDARs containing mutant GluN2 subunits. 
We found that the NMDAR subtype specificity of three crucial channel properties, Mg2+ block, selective permeability to Ca2+ and single-channel conductance, was controlled primarily by the residue at a single GluN2 site in the M3 transmembrane region. Mutant cycle analysis guided by molecular modeling revealed that a GluN2-GluN1 subunit interaction mediates the site's effects. We conclude that a single GluN2 subunit residue couples with the pore-forming loop of the GluN1 subunit to create naturally occurring variations in NMDAR properties that are critical to synaptic plasticity and learning.", "title": "" } ]
scidocsrr
73bd8719b3e56451ea6f2dcf7b4aa77f
View synthesis by trinocular edge matching and transfer
[ { "docid": "b29947243b1ad21b0529a6dd8ef3c529", "text": "We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency hand are assembled into a corresponding bandpass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wave lengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. In this way, the spline is matched to the scale of features within the images themselves. When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of th e border.", "title": "" }, { "docid": "5aa5ebf7727ea1b5dcf4d8f74b13cb29", "text": "Visual object recognition requires the matching of an image with a set of models stored in memory. In this paper, we propose an approach to recognition in which a 3-D object is represented by the linear combination of 2-D images of the object. IfJLk{M1,.” .Mk} is the set of pictures representing a given object and P is the 2-D image of an object to be recognized, then P is considered to be an instance of M if P= C~=,aiMi for some constants (pi. We show that this approach handles correctly rigid 3-D transformations of objects with sharp as well as smooth boundaries and can also handle nonrigid transformations. The paper is divided into two parts. In the first part, we show that the variety of views depicting the same object under different transformations can often be expressed as the linear combinations of a small number of views. In the second part, we suggest how this linear combination property may be used in the recognition process.", "title": "" } ]
[ { "docid": "40839b46d8e5593d6c59e04cd5ec2316", "text": "The main focus of image mining is concerned with the classification of brain tumor in the CT scan brain images. The major steps involved in the system are: pre-processing, feature extraction, association rule mining and classification. Here, we present some experiments for tumor detection in MRI images. The pre-processing step has been done using the median filtering process and features have been extracted using texture feature extraction technique. The extracted features from the CT scan images are used to mine the association rules. The proposed method is used to classify the medical images for diagnosis. In this system we are going to use Decision Tree classification algorithm. The proposed method improves the efficiency than the traditional image mining methods. Here, results which we get are compared with Naive Bayesian classification algorithm.", "title": "" }, { "docid": "7c2c2dc6ba53f08f48a2b45672f0002d", "text": "In past few decades, development in the area power electronics which increases the demand for high performance industrial applications has contributed to rapid developments in digital motor control. High efficiency, reduced noise, extended reliability at optimum cost is the challenge facing by many industries which uses the electric motors. Now days, the demand of electronic motor control is increases rapidly, not only in the area of automotive, computer peripherals, but also in industrial, electrical applications. All these applications need cost-effective solutions without compromising reliability. The purpose of this system is to design, simulate and implement the most feasible motor control for use in industrial and electrical applications. The proposed design describes the designing and development of a three phase induction motor drive with speed sensing. It is based on PIC18F4431 microcontroller which is dedicated for motor control applications. The designed drive is a low cost motor control drive used for medium power three phase induction motor and is targeted for industrial and electric appliances e.g. washing machines, compressors, air conditioning units, electric pumps and some simple industrial drives. The designed motor drive has another advantage that it would converts single phase into three phases supply where three phase motors are operated on a single phase supply. So it is the best option for those applications where three phase supply is not available. In such applications, three phase motors are preferred because they are efficient, economical and require less severe starting current. This paper deals with PWM technique used for speed control of three phase induction motor using single phase supply with PIC microcontroller. The implemented system converts a single phase AC input into high DC. The high DC is converted into three phase AC voltage by using inverter circuit. The desired AC voltage can be obtained by changing the switching time of MOSFET’s using PWM signals. These PWM signals are generated by the PIC microcontroller. Different PWM schemes are used for firing 1. Ph.D. student, Department of Electronics, Shankarrao Mohite Mahavidyalaya, Akluj, Solapur. bnjamadar@yahoo.co.in 2. Department of Electronics, Willingdon College, Sangli. srkumbhar@yahoo.co.in 3. Department of Electronics, D.B.F. Dayanand College of Arts and Science, Solapur. dssutrave@gmail.com of MOSFET’s and harmonic profiles are recorded through simulation. 
Out of them, best PWM firing scheme is used for the better efficiency. Speed variation of the induction motor is then recorded by changing duty cycle of the firing pulse of an inverter.", "title": "" }, { "docid": "7dc0be689a4c58f4bc6ee0624605df81", "text": "Oil spills represent a major threat to ocean ecosystems and their health. Illicit pollution requires continuous monitoring and satellite remote sensing technology represents an attractive option for operational oil spill detection. Previous studies have shown that active microwave satellite sensors, particularly Synthetic Aperture Radar (SAR) can be effectively used for the detection and classification of oil spills. Oil spills appear as dark spots in SAR images. However, similar dark spots may arise from a range of unrelated meteorological and oceanographic phenomena, resulting in misidentification. A major focus of research in this area is the development of algorithms to distinguish oil spills from `look-alikes'. This paper describes the development of a new approach to SAR oil spill detection employing two different Artificial Neural Networks (ANN), used in sequence. The first ANN segments a SAR image to identify pixels belonging to candidate oil spill features. A set of statistical feature parameters are then extracted and used to drive a second ANN which classifies objects into oil spills or look-alikes. The proposed algorithm was trained using 97 ERS-2 SAR and ENVSAT ASAR images of individual verified oil spills or/and look-alikes. The algorithm was validated using a large dataset comprising full-swath images and correctly identified 91.6% of reported oil spills and 98.3% of look-alike phenomena. The segmentation stage of the new technique outperformed the established edge detection and adaptive thresholding approaches. An analysis of feature descriptors highlighted the importance of image gradient information in the classification stage.", "title": "" }, { "docid": "7de95df8465d041cbb8000ae237174f6", "text": "Executive Summary The volume of research on learning and instruction is enormous. Yet progress in improving educational outcomes has been slow at best. Many learning science results have not been translated into general practice and it appears that most that have been fielded have not yielded significant results in randomized control trials. Addressing the chasm between learning science and educational practice will require massive efforts from many constituencies, but one of these efforts is to develop a theoretical framework that permits a more systematic accumulation of the relevant research base.", "title": "" }, { "docid": "af29a155a5afdb5b1a0c055d1fcb8f32", "text": "The cyclic redundancy check (CRC) is a popular error detection code (EDC) used in many digital transmission and storage protocols. Most existing digit-serial hardware CRC computation architectures are based on one of the two well-known bit-serial CRC linear feedback shift register (LFSR) architectures. In this paper, we present and investigate a generalized CRC formulation that incorporates negative degree terms. Through software simulations, we identify useful formulations that result in reduced time and/or area complexity CRC circuits compared to the existing non-retimed approaches. Implementation results on an Altera field-programmable gate array (FPGA) device are reported. 
We conclude that the proposed approach is most effective when the digit size is greater than the generator polynomial degree.", "title": "" }, { "docid": "ebc8966779ba3b9e6a768f4c462093f5", "text": "Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003—significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.", "title": "" }, { "docid": "7e683f15580e77b1e207731bb73b8107", "text": "The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. However, the use of a discrete image presents a lot of problems that may influence the extraction of the skeleton. Moreover, most of the methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, efficient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine whether a given pixel is a skeleton point independently. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton with a single pixel width without requiring a linking algorithm or iteration process. Experiments show that the runtime of our algorithm is faster than the distance transformation and is linearly proportional to the number of pixels of an image. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "cab874a37c348491c85bfacb46d669b8", "text": "Recent advances in meta-learning are providing the foundations to construct meta-learning assistants and task-adaptive learners. The goal of this special issue is to foster an interest in meta-learning by compiling representative work in the field. The contributions to this special issue provide strong insights into the construction of future meta-learning tools. In this introduction we present a common frame of reference to address work in meta-learning through the concept of meta-knowledge.
We show how meta-learning can be simply defined as the process of exploiting knowledge about learning that enables us to understand and improve the performance of learning algorithms.", "title": "" }, { "docid": "657614eba108bd1e58315299ac29ee7f", "text": "In this research, an intelligent system is designed between the user and the database system which accepts natural language input and then converts it into an SQL query. The research focuses on incorporating complex queries along with simple queries irrespective of the database. The system accommodates aggregate functions, multiple conditions in WHERE clause, advanced clauses like ORDER BY, GROUP BY and HAVING. The system handles single sentence natural language inputs, which are with respect to selected database. The research currently concentrates on MySQL database system. The natural language statement goes through various stages of Natural Language Processing like morphological, lexical, syntactic and semantic analysis resulting in SQL query formation.", "title": "" }, { "docid": "bbe59dd74c554d92167f42701a1f8c3d", "text": "Finding subgraph isomorphisms is an important problem in many applications which deal with data modeled as graphs. While this problem is NP-hard, in recent years, many algorithms have been proposed to solve it in a reasonable time for real datasets using different join orders, pruning rules, and auxiliary neighborhood information. However, since they have not been empirically compared one another in most research work, it is not clear whether the later work outperforms the earlier work. Another problem is that reported comparisons were often done using the original authors’ binaries which were written in different programming environments. In this paper, we address these serious problems by re-implementing five state-of-the-art subgraph isomorphism algorithms in a common code base and by comparing them using many real-world datasets and their query loads. Through our in-depth analysis of experimental results, we report surprising empirical findings.", "title": "" }, { "docid": "36cb5fa9af45fcd34d6c1114d6cd9be5", "text": "The quality of requirements is typically considered as an important factor for the quality of the end product. For traditional up-front requirements specifications, a number of standards have been defined on what constitutes good quality : Requirements should be complete, unambiguous, specific, time-bounded, consistent, etc. For agile requirements specifications, no new standards have been defined yet, and it is not clear yet whether traditional quality criteria still apply. To investigate what quality criteria for assessing the correctness of written agile requirements exist, we have conducted a systematic literature review. The review resulted in a list of 16 selected papers on this topic. These selected papers describe 28 different quality criteria for agile requirements specifications. We categorize and analyze these criteria and compare them with those from traditional requirements engineering. We discuss findings from the 16 papers in the form of recommendations for practitioners on quality assessment of agile requirements. 
At the same time, we indicate the open points in the form of a research agenda for researchers working on this topic .", "title": "" }, { "docid": "7c097c95fb50750c082877ab7e277cd9", "text": "40BAbstract: Disease Intelligence (DI) is based on the acquisition and aggregation of fragmented knowledge of diseases at multiple sources all over the world to provide valuable information to doctors, researchers and information seeking community. Some diseases have their own characteristics changed rapidly at different places of the world and are reported on documents as unrelated and heterogeneous information which may be going unnoticed and may not be quickly available. This research presents an Ontology based theoretical framework in the context of medical intelligence and country/region. Ontology is designed for storing information about rapidly spreading and changing diseases with incorporating existing disease taxonomies to genetic information of both humans and infectious organisms. It further maps disease symptoms to diseases and drug effects to disease symptoms. The machine understandable disease ontology represented as a website thus allows the drug effects to be evaluated on disease symptoms and exposes genetic involvements in the human diseases. Infectious agents which have no known place in an existing classification but have data on genetics would still be identified as organisms through the intelligence of this system. It will further facilitate researchers on the subject to try out different solutions for curing diseases.", "title": "" }, { "docid": "28d04da9c3a5183ab3e02cd46c53caac", "text": "We introduce a new technique that automatically generates diverse, visually compelling stylizations for a photograph in an unsupervised manner. We achieve this by learning style ranking for a given input using a large photo collection and selecting a diverse subset of matching styles for final style transfer. We also propose an improved technique that transfers the global color and tone of the chosen exemplars to the input photograph while avoiding the common visual artifacts produced by the existing style transfer methods. Together, our style selection and transfer techniques produce compelling, artifact-free results on a wide range of input photographs, and a user study shows that our results are preferred over other techniques.", "title": "" }, { "docid": "64c9a3da19efc8fa29ae648e0cc13138", "text": "Time-sync video tagging aims to automatically generate tags for each video shot. It can improve the user's experience in previewing a video's timeline structure compared to traditional schemes that tag an entire video clip. In this paper, we propose a new application which extracts time-sync video tags by automatically exploiting crowdsourced comments from video websites such as Nico Nico Douga, where videos are commented on by online crowd users in a time-sync manner. The challenge of the proposed application is that users with bias interact with one another frequently and bring noise into the data, while the comments are too sparse to compensate for the noise. Previous techniques are unable to handle this task well as they consider video semantics independently, which may overfit the sparse comments in each shot and thus fail to provide accurate modeling. To resolve these issues, we propose a novel temporal and personalized topic model that jointly considers temporal dependencies between video semantics, users' interaction in commenting, and users' preferences as prior knowledge. 
Our proposed model shares knowledge across video shots via users to enrich the short comments, and peels off user interaction and user bias to solve the noisy-comment problem. Log-likelihood analyses and user studies on large datasets show that the proposed model outperforms several state-of-the-art baselines in video tagging quality. Case studies also demonstrate our model's capability of extracting tags from the crowdsourced short and noisy comments.", "title": "" }, { "docid": "a33f962c4a6ea61d3400ca9feea50bd7", "text": "Now, we come to offer you the right catalogues of book to open. artificial intelligence techniques for rational decision making is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.", "title": "" }, { "docid": "651ef4a9b381441a643f39cfd9799eef", "text": "Increasing levels of wind generation has resulted in an urgent need for the assessment of their impact on frequency control of power systems. Whereas increased system inertia is intrinsically linked to the addition of synchronous generation to power systems, due to differing electromechanical characteristics, this inherent link is not present in wind turbine generators. Regardless of wind turbine technology, the displacement of conventional generation with wind will result in increased rates of change of system frequency. The magnitude of the frequency excursion following a loss of generation may also increase. Amendment of reserve policies or modification of wind turbine inertial response characteristics may be necessary to facilitate increased levels of wind generation. This is particularly true in small isolated power systems.", "title": "" }, { "docid": "8aca909e0f83a8ac917a453fdcc73b6f", "text": "Nearly half a century ago, military organizations introduced “Tempest” emission-security test standards to control information leakage from unintentional electromagnetic emanations of digital electronics. The nature of these emissions has changed with evolving technology; electromechanic devices have vanished and signal frequencies increased several orders of magnitude. Recently published eavesdropping attacks on modern flat-panel displays and cryptographic coprocessors demonstrate that the risk remains acute for applications with high protection requirements. The ultra-wideband signal processing technology needed for practical attacks finds already its way into consumer electronics. Current civilian RFI limits are entirely unsuited for emission security purposes. Only an openly available set of test standards based on published criteria will help civilian vendors and users to estimate and manage emission-security risks appropriately. This paper outlines a proposal and rationale for civilian electromagnetic emission-security limits. While the presented discussion aims specifically at far-field video eavesdropping in the VHF and UHF bands, the most easy to demonstrate risk, much of the presented approach for setting test limits could be adapted equally to address other RF emanation risks.", "title": "" }, { "docid": "b6d655df161d6c47675e9cb17173a521", "text": "Nigeria is considered as one of the many countries in sub-Saharan Africa with a weak economy and gross deficiencies in technology and engineering. 
Available data from international monitoring and regulatory organizations show that technology is pivotal to determining the economic strengths of nations all over the world. Education is critical to technology acquisition, development, dissemination and adaptation. Thus, this paper seeks to critically assess and discuss issues and challenges facing technological advancement in Nigeria, particularly in the education sector, and also proffers solutions to resuscitate the Nigerian education system towards achieving national technological and economic sustainability such that Nigeria can compete favourably with other technologicallydriven economies of the world in the not-too-distant future. Keywords—Economically weak countries, education, globalization and competition, technological advancement.", "title": "" }, { "docid": "d1069c06341e484e7f3b5ab7a4a49a2d", "text": "In a \"nutrition transition\", the consumption of foods high in fats and sweeteners is increasing throughout the developing world. The transition, implicated in the rapid rise of obesity and diet-related chronic diseases worldwide, is rooted in the processes of globalization. Globalization affects the nature of agri-food systems, thereby altering the quantity, type, cost and desirability of foods available for consumption. Understanding the links between globalization and the nutrition transition is therefore necessary to help policy makers develop policies, including food policies, for addressing the global burden of chronic disease. While the subject has been much discussed, tracing the specific pathways between globalization and dietary change remains a challenge. To help address this challenge, this paper explores how one of the central mechanisms of globalization, the integration of the global marketplace, is affecting the specific diet patterns. Focusing on middle-income countries, it highlights the importance of three major processes of market integration: (I) production and trade of agricultural goods; (II) foreign direct investment in food processing and retailing; and (III) global food advertising and promotion. The paper reveals how specific policies implemented to advance the globalization agenda account in part for some recent trends in the global diet. Agricultural production and trade policies have enabled more vegetable oil consumption; policies on foreign direct investment have facilitated higher consumption of highly-processed foods, as has global food marketing. These dietary outcomes also reflect the socioeconomic and cultural context in which these policies are operating. An important finding is that the dynamic, competitive forces unleashed as a result of global market integration facilitates not only convergence in consumption habits (as is commonly assumed in the \"Coca-Colonization\" hypothesis), but adaptation to products targeted at different niche markets. This convergence-divergence duality raises the policy concern that globalization will exacerbate uneven dietary development between rich and poor. As high-income groups in developing countries accrue the benefits of a more dynamic marketplace, lower-income groups may well experience convergence towards poor quality obseogenic diets, as observed in western countries. Global economic policies concerning agriculture, trade, investment and marketing affect what the world eats. They are therefore also global food and health policies. 
Health policy makers should pay greater attention to these policies in order to address some of the structural causes of obesity and diet-related chronic diseases worldwide, especially among the groups of low socioeconomic status.", "title": "" }, { "docid": "8d0c470056d3f5854419cc4ec0327a99", "text": "Aquaponics is the science of integrating intensive fish aquaculture with plant production in recirculating water systems. Although ion waste production by fish cannot satisfy all plant requirements, less is known about the relationship between total feed provided for fish and the production of milliequivalents (mEq) of different macronutrients for plants, especially for nutrient flow hydroponics used for strawberry production in Spain. That knowledge is essential to consider the amount of macronutrients available in aquaculture systems so that farmers can estimate how much nutrient needs to be supplemented in the waste water from fish, to produce viable plant growth. In the present experiment, tilapia (Oreochromis niloticus L.) were grown in a small-scale recirculating system at two different densities while growth and feed consumption were noted every week for five weeks. At the same time points, water samples were taken to measure pH, EC25, HCO3, Cl, NH4, NO2, NO3, H2PO4, SO4, Na, K, Ca and Mg build up. The total increase in mEq of each ion per kg of feed provided to the fish was highest for NO3, followed, in decreasing order, by Ca, H2PO4, K, Mg and SO4. The total amount of feed required per mEq ranged from 1.61-13.1 kg for the four most abundant ions (NO3, Ca, H2PO4 and K) at a density of 2 kg fish m, suggesting that it would be rather easy to maintain small populations of fish to reduce the cost of hydroponic solution supplementation for strawberries. Additional key words: aquaculture; nutrients; recirculating system; tilapia; water quality.", "title": "" } ]
scidocsrr
17721be74ea798ccfb2887688ddac0ee
EEG signal analysis for BCI application using fuzzy system
[ { "docid": "447840ecc14ef022c04f14f3ea3ec8a0", "text": "Healthcare plays an important role in promoting the general health and well-being of people around the world. The difficulty in healthcare data classification arises from the uncertainty and the high-dimensional nature of the medical data collected. This paper proposes an integration of fuzzy standard additive model (SAM) with genetic algorithm (GA), called GSAM, to deal with uncertainty and computational challenges. GSAM learning process comprises three continual steps: rule initialization by unsupervised learning using the adaptive vector quantization clustering, evolutionary rule optimization by GA and parameter tuning by the gradient descent supervised learning. Wavelet transformation is employed to extract discriminative features for high-dimensional datasets. GSAM becomes highly capable when deployed with small number of wavelet features as its computational burden is remarkably reduced. The proposed method is evaluated using two frequently-used medical datasets: the Wisconsin breast cancer and Cleveland heart disease from the UCI Repository for machine learning. Experiments are organized with a five-fold cross validation and performance of classification techniques are measured by a number of important metrics: accuracy, F-measure, mutual information and area under the receiver operating characteristic curve. Results demonstrate the superiority of the GSAM compared to other machine learning methods including probabilistic neural network, support vector machine, fuzzy ARTMAP, and adaptive neuro-fuzzy inference system. The proposed approach is thus helpful as a decision support system for medical practitioners in the healthcare practice. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "08634303d285ec95873e003eeac701eb", "text": "This paper describes the application of adaptive neuro-fuzzy inference system (ANFIS) model for classification of electroencephalogram (EEG) signals. Decision making was performed in two stages: feature extraction using the wavelet transform (WT) and the ANFIS trained with the backpropagation gradient descent method in combination with the least squares method. Five types of EEG signals were used as input patterns of the five ANFIS classifiers. To improve diagnostic accuracy, the sixth ANFIS classifier (combining ANFIS) was trained using the outputs of the five ANFIS classifiers as input data. The proposed ANFIS model combined the neural network adaptive capabilities and the fuzzy logic qualitative approach. Some conclusions concerning the saliency of features on classification of the EEG signals were obtained through analysis of the ANFIS. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracies and the results confirmed that the proposed ANFIS model has potential in classifying the EEG signals.", "title": "" } ]
[ { "docid": "b7944edc9e6704cbf59489f112f46c11", "text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwisereputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately ref lect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model ~or DAPM!, in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies ~such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest ~~1930!, pp. 493–494! he argued that nominal interest rates systematically fail to adjust sufficiently for inf lation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes ~1936! famously commented on animal spirits in stock markets. Markowitz ~1952! proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001", "title": "" }, { "docid": "fb439563ed9052bf9cc261cdcf5ef34a", "text": "With the quickly increase development of website and web application, internet user utilize that benefits. They make their all day to day daily life activity like reading a newspaper, online shopping, online payment etc. Hence the chances of the users to get caught in the web threat its called phishing attack. There for the phishing detection is necessary. 
There is no conclusive solution to detect phishing. In this paper we present novel technique to detect phishing attack and compare with the other existing technique. Our proposed framework work on combine algorithm of rule mining and SVM.", "title": "" }, { "docid": "5dc898dc6c9dd35994170cf134de3be6", "text": "This paper investigates a new approach in straw row position and orientation reconstruction in an open field, based on image segmentation with Fully Convolutional Networks (FCN). The model architecture consists of an encoder (for feature extraction) and decoder (produces segmentation map from encoded features) modules and similar to [1] except for two fully connected layers. The heatmaps produced by the FCN are used to determine orientations and spatial arrangments of the straw rows relatively to harvester via transforming the bird's eye view and Fast Hough Transform (FHT). This leads to real-time harvester trajectory optimization over treated area of the field by correction conditions calculation through the row’s directions family.", "title": "" }, { "docid": "be7d32aeffecc53c5d844a8f90cd5ce0", "text": "Wordnets play a central role in many natural language processing tasks. This paper introduces a multilingual editing system for the Open Multilingual Wordnet (OMW: Bond and Foster, 2013). Wordnet development, like most lexicographic tasks, is slow and expensive. Moving away from the original Princeton Wordnet (Fellbaum, 1998) development workflow, wordnet creation and expansion has increasingly been shifting towards an automated and/or interactive system facilitated task. In the particular case of human edition/expansion of wordnets, a few systems have been developed to aid the lexicographers’ work. Unfortunately, most of these tools have either restricted licenses, or have been designed with a particular language in mind. We present a webbased system that is capable of multilingual browsing and editing for any of the hundreds of languages made available by the OMW. All tools and guidelines are freely available under an open license.", "title": "" }, { "docid": "b3a9ad04e7df1b2250f0a7b625509efd", "text": "Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on recognition and generation of emotions as well as emotion driven behavior. Our work focuses on the use of emotions in dialogue systems that can be used with speech input or as well in multi-modal environments.This paper describes a framework for using emotional cues in a dialogue system and their informational characterization. We describe emotion models that can be integrated into the dialogue system and can be used in different domains and tasks. Our application of the dialogue system is planned to model multi-modal human-computer-interaction with a humanoid robotic system.", "title": "" }, { "docid": "5e2ee8afe2f74c8bc30e48fb9dc6409c", "text": "Change detection can be treated as a generative learning procedure, in which the connection between bitemporal images and the desired change map can be modeled as a generative one. In this letter, we propose an unsupervised change detection method based on generative adversarial networks (GANs), which has the ability of recovering the training data distribution from noise input. 
Here, the joint distribution of the two images to be detected is taken as input and an initial difference image (DI), generated by traditional change detection method such as change vector analysis, is used to provide prior knowledge for sampling the training data based on Bayesian theorem and GAN’s min–max game theory. Through the continuous adversarial learning, the shared mapping function between the training data and their corresponding image patches can be built in GAN’s generator, from which a better DI can be generated. Finally, an unsupervised clustering algorithm is used to analyze the better DI to obtain the desired binary change map. Theoretical analysis and experimental results demonstrate the effectiveness and robustness of the proposed method.", "title": "" }, { "docid": "e27da58188be54b71187d3489fa6b4e7", "text": "In a prospective-longitudinal study of a representative birth cohort, we tested why stressful experiences lead to depression in some people but not in others. A functional polymorphism in the promoter region of the serotonin transporter (5-HT T) gene was found to moderate the influence of stressful life events on depression. Individuals with one or two copies of the short allele of the 5-HT T promoter polymorphism exhibited more depressive symptoms, diagnosable depression, and suicidality in relation to stressful life events than individuals homozygous for the long allele. This epidemiological study thus provides evidence of a gene-by-environment interaction, in which an individual's response to environmental insults is moderated by his or her genetic makeup.", "title": "" }, { "docid": "4410effe71e07d8414c31198b84afa4b", "text": "SILVA (from Latin silva, forest, http://www.arb-silva.de) is a comprehensive web resource for up to date, quality-controlled databases of aligned ribosomal RNA (rRNA) gene sequences from the Bacteria, Archaea and Eukaryota domains and supplementary online services. The referred database release 111 (July 2012) contains 3 194 778 small subunit and 288 717 large subunit rRNA gene sequences. Since the initial description of the project, substantial new features have been introduced, including advanced quality control procedures, an improved rRNA gene aligner, online tools for probe and primer evaluation and optimized browsing, searching and downloading on the website. Furthermore, the extensively curated SILVA taxonomy and the new non-redundant SILVA datasets provide an ideal reference for high-throughput classification of data from next-generation sequencing approaches.", "title": "" }, { "docid": "0c038efcb192f6d51665fb6b276828d9", "text": "With the increasingly rich of vulnerability related data and the extensive application of machine learning methods, software vulnerability analysis methods based on machine learning is becoming an important research area of information security. In this paper, the up-to-date and well-known works in this research area were analyzed deeply. A framework for software vulnerability analysis based on machine learning was proposed. And the existing works were described and compared, the limitations of these works were discussed. The future research directions on software vulnerability analysis based on machine learning were put forward in the end.", "title": "" }, { "docid": "17c3e9af0d6bc8cd4e0915df0b9b2bf3", "text": "The focus of the three previous chapters has been on context-free grammars and their use in automatically generating constituent-based representations. 
Here we present another family of grammar formalisms called dependency grammars that Dependency grammar are quite important in contemporary speech and language processing systems. In these formalisms, phrasal constituents and phrase-structure rules do not play a direct role. Instead, the syntactic structure of a sentence is described solely in terms of the words (or lemmas) in a sentence and an associated set of directed binary grammatical relations that hold among the words. The following diagram illustrates a dependency-style analysis using the standard graphical method favored in the dependency-parsing community. (14.1) I prefer the morning flight through Denver nsubj dobj det nmod nmod case root Relations among the words are illustrated above the sentence with directed, labeled arcs from heads to dependents. We call this a typed dependency structure Typed dependency because the labels are drawn from a fixed inventory of grammatical relations. It also includes a root node that explicitly marks the root of the tree, the head of the entire structure. Figure 14.1 shows the same dependency analysis as a tree alongside its corresponding phrase-structure analysis of the kind given in Chapter 11. Note the absence of nodes corresponding to phrasal constituents or lexical categories in the dependency parse; the internal structure of the dependency parse consists solely of directed relations between lexical items in the sentence. These relationships directly encode important information that is often buried in the more complex phrase-structure parses. For example, the arguments to the verb prefer are directly linked to it in the dependency structure, while their connection to the main verb is more distant in the phrase-structure tree. Similarly, morning and Denver, modifiers of flight, are linked to it directly in the dependency structure. A major advantage of dependency grammars is their ability to deal with languages that are morphologically rich and have a relatively free word order. For Free word order example, word order in Czech can be much more flexible than in English; a grammatical object might occur before or after a location adverbial. A phrase-structure grammar would need a separate rule for each possible place in the parse tree where such an adverbial phrase could occur. A dependency-based approach would just have one link type representing this particular adverbial relation. Thus, a dependency grammar approach abstracts away from word-order information, …", "title": "" }, { "docid": "1a38695797b921e35e0987eeed11c95d", "text": "We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded in data in this way may be easier to learn, generalize better, and be less dependent on accurate prior models than, for example, POMDP state representations. Building on prior work by Jaeger and by Rivest and Schapire, in this paper we compare and contrast a linear specialization of the predictive approach with the state representations used in POMDPs and in k-order Markov models. Ours is the first specific formulation of the predictive idea that includes both stochasticity and actions (controls). We show that any system has a linear predictive state representation with number of predictions no greater than the number of states in its minimal POMDP model. In predicting or controlling a sequence of observations, the concepts of state and state estimation inevitably arise. 
There have been two dominant approaches. The generative-model approach, typified by research on partially observable Markov decision processes (POMDPs), hypothesizes a structure for generating observations and estimates its state and state dynamics. The history-based approach, typified by k-order Markov methods, uses simple functions of past observations as state, that is, as the immediate basis for prediction and control. (The data flow in these two approaches are diagrammed in Figure 1.) Of the two, the generative-model approach is more general. The model's internal state gives it temporally unlimited memorythe ability to remember an event that happened arbitrarily long ago--whereas a history-based approach can only remember as far back as its history extends. The bane of generative-model approaches is that they are often strongly dependent on a good model of the system's dynamics. Most uses of POMDPs, for example, assume a perfect dynamics model and attempt only to estimate state. There are algorithms for simultaneously estimating state and dynamics (e.g., Chrisman, 1992), analogous to the Baum-Welch algorithm for the uncontrolled case (Baum et al., 1970), but these are only effective at tuning parameters that are already approximately correct (e.g., Shatkay & Kaelbling, 1997). observations (and actions) (a) state 1-----1-----1..rep'n observations¢E (and actions) / state t/' rep'n 1-step --+ . delays", "title": "" }, { "docid": "d8e9a4d827be8f75ce601935032d829c", "text": "UNLABELLED\nThe management of amyelic thoracolumbar burst fractures remains controversial. In this study, we compared the clinical efficacy of percutaneous kyphoplasty (PKP) and short-segment pedicle instrumentation (SSPI). Twenty-three patients were treated with PKP, and 25 patients with SSPI. They all presented with Type A3 amyelic thoracolumbar fractures. Clinical outcomes were evaluated by a Visual Analog Scale (VAS) and Oswestry Disability Index (ODI) preoperatively, postoperatively, and at two years follow-up. Radiographic data including the anterior and posterior vertebral body height, kyphotic angle, as well as spinal canal compromise was also evaluated. The patients in both groups were similar regarding age, bone mineral density (BMD), follow-up period, severity of the deformity and fracture. Blood loss, operation time, and bed-rest time were less in the PKP group. VAS, ODI score improved more rapidly after surgery in the PKP group. No significant difference was found in VAS and ODI scores between the two groups at final follow-up (p > 0.05). Meanwhile, the height of anterior vertebrae (Ha), the height of posterior vertebrae (Hp) and the kyphosis angle showed significant improvement in each group (p < 0.05). The postoperative improvement in spinal canal compromise was not statistically significant in the PKP group (p > 0.05); there was a significant improvement in the SSPI group (p < 0.05). Moreover, these postoperative radiographic assessments showed significant differences between the two groups regarding the improvement of canal compromise (p < 0.05). At final follow-up, remodeling of spinal canal compromise was detected in both groups.\n\n\nCONCLUSION\nBoth PKP and SSPI appeared as effective and reliable operative techniques for selected amyelic thoracolumbar fractures in the short-term. PKP had a significantly smaller blood loss and shorter bed-rest time, but SSPI provided a better reduction. 
Long-time studies should be conducted to support these clinical outcomes.", "title": "" }, { "docid": "78e5f70a0037cd14a5bb991d89a2940e", "text": "Regression testing is a necessary but expensive maintenance activity aimed at showing that code has not been adversely affected by changes. Regression test selection techniques reuse tests from an existing test suite to test a modified program. Many regression test selection techniques have been proposed; however, it is difficult to compare and evaluate these techniques because they have different goals. This paper outlines the issues relevant to regression test selection techniques, and uses these issues as the basis for a framework within which to evaluate the techniques. We illustrate the application of our framework by using it to evaluate existing regression test selection techniques. The evaluation reveals the strengths and weaknesses of existing techniques, and highlights some problems that future work in this area should address.", "title": "" }, { "docid": "61813ea707d29f4e005759b6979b7db5", "text": "This paper presents a method for automatic classification of birds into different species based on the audio recordings of their sounds. Each individual syllable segmented from continuous recordings is regarded as the basic recognition unit. To represent the temporal variations as well as sharp transitions within a syllable, a feature set derived from static and dynamic two-dimensional Mel-frequency cepstral coefficients are calculated for the classification of each syllable. Since a bird might generate several types of sounds with variant characteristics, a number of representative prototype vectors are used to model different syllables of identical bird species. For each bird species, a model selection method is developed to determine the optimal mode between Gaussian mixture models (GMM) and vector quantization (VQ) when the amount of training data is different for each species. In addition, a component number selection algorithm is employed to find the most appropriate number of components of GMM or the cluster number of VQ for each species. The mean vectors of GMM or the cluster centroids of VQ will form the prototype vectors of a certain bird species. In the experiments, the best classification accuracy is 84.06% for the classification of 28 bird species.", "title": "" }, { "docid": "d84ef527d58d70b3c559d21608901d2f", "text": "Whistleblowing on organizational wrongdoing is becoming increasingly prevalent. What aspects of the person, the context, and the transgression relate to whistleblowing intentions and to actual whistleblowing on corporate wrongdoing? Which aspects relate to retaliation against whistleblowers? Can we draw conclusions about the whistleblowing process by assessing whistleblowing intentions? Meta-analytic examination of 193 correlations obtained from 26 samples (N = 18,781) reveals differences in the correlates of whistleblowing intentions and actions. Stronger relationships were found between personal, contextual, and wrongdoing characteristics and whistleblowing intent than with actual whistleblowing. Retaliation might best be predicted using contextual variables. Implications for research and practice are discussed.", "title": "" }, { "docid": "3f82f5b9f146e38311334bd71ea4588b", "text": "We present a novel algorithm for performing integrated segmentation and 3D pose estimation of a human body from multiple views. 
Unlike other related state of the art techniques which focus on either segmentation or pose estimation individually, our approach tackles these two tasks together. Normally, when optimizing for pose, it is traditional to use some fixed set of features, e.g. edges or chamfer maps. In contrast, our novel approach consists of optimizing a cost function based on a Markov Random Field (MRF). This has the advantage that we can use all the information in the image: edges, background and foreground appearances, as well as the prior information on the shape and pose of the subject and combine them in a Bayesian framework. Previously, optimizing such a cost function would have been computationally infeasible. However, our recent research in dynamic graph cuts allows this to be done much more efficiently than before. We demonstrate the efficacy of our approach on challenging motion sequences. Note that although we target the human pose inference problem in the paper, our method is completely generic and can be used to segment and infer the pose of any specified rigid, deformable or articulated object.", "title": "" }, { "docid": "021789cea259697f236986028218e3f6", "text": "In the IT world of corporate networking, how businesses store and compute data is starting to shift from in-house servers to the cloud. However, some enterprises are still hesitant to make this leap to the cloud because of their information security and data privacy concerns. Enterprises that want to invest into this service need to feel confident that the information stored on the cloud is secure. Due to this need for confidence, trust is one of the major qualities that cloud service providers (CSPs) must build for cloud service users (CSUs). To do this, a model that all CSPs can follow must exist to establish a trust standard in the industry. If no concrete model exists, the future of cloud computing will be stagnant. This paper presents a new trust model that involves all the cloud stakeholders such as CSU, CSP, and third-party auditors. Our proposed trust model is objective since it involves third-party auditors to develop unbiased trust between the CSUs and the CSPs. Furthermore, to support the implementation of the proposed trust model, we rank CSPs according to the trust-values obtained from the trust model. The final score for each participating CSP will be determined based on the third-party assessment and the feedback received from the CSUs.", "title": "" }, { "docid": "c6dc5ef03785a581b9496114c9438fca", "text": "In this brief, we propose a novel multilabel learning framework, called multilabel self-paced learning, in an attempt to incorporate the SPL scheme into the regime of multilabel learning. Specifically, we first propose a new multilabel learning formulation by introducing a self-paced function as a regularizer, so as to simultaneously prioritize label learning tasks and instances in each iteration. Considering that different multilabel learning scenarios often need different self-paced schemes during learning, we thus provide a general way to find the desired self-paced functions. To the best of our knowledge, this is the first work to study multilabel learning by jointly taking into consideration the complexities of both training instances and labels. 
Experimental results on four publicly available data sets suggest the effectiveness of our approach, compared with the state-of-the-art methods.", "title": "" }, { "docid": "af4d583cf45d13c09e59a927905a7794", "text": "Background and aims: Addiction to internet and mobile phone may be affecting all aspect of student’s life. Knowledge about prevalence and related factors of internet and mobile phone addiction is necessary for planning for prevention and treatment. This study was conducted to evaluate the prevalence of internet and mobile phone addiction among Iranian students. Methods: This cross sectional study conducted from Jun to April 2015 in Rasht Iran. With using stratified sampling method, 581 high school students from two region of Rasht in North of Iran were recruited as the subjects for this study. Data were collected with using demographics questionnaire, Cell phone Overuse Scale (COS), and the Internet Addiction Test (IAT). Analysis was performed using Statistical Package for Social Sciences (SPSS) 17 21 version. Results: Of the 581 students, who participate in present study, 53.5% were female and the rest were male. The mean age of students was 16.28±1.01 years. The mean score of IAT was 42.03±18.22. Of the 581 students, 312 (53.7%), 218 (37.5%) and 51 (8.8%) showed normal, mild and moderate level of internet addiction. The mean score of COS was 55.10±19.86.Of the 581 students, 27(6/4%), 451(6/77) and 103 (7/17) showed low, moderate and high level of mobile phone addiction. Conclusion: according to finding of present study, rate of mobile phone and internet addiction were high among Iranian students. Health care authorities should pay more attention to these problems.", "title": "" }, { "docid": "a1d6a739b10ec93229c33e0a8607e75e", "text": "We present and discuss the important business problem of estimating the effect of retention efforts on the Lifetime Value of a customer in the Telecommunications industry. We discuss the components of this problem, in particular customer value and length of service (or tenure) modeling, and present a novel segment-based approach, motivated by the segment-level view marketing analysts usually employ. We then describe how we build on this approach to estimate the effects of retention on Lifetime Value. Our solution has been successfully implemented in Amdocs' Business Insight (BI) platform, and we illustrate its usefulness in real-world scenarios.", "title": "" } ]
scidocsrr
e3edcfbdf67031fc576bea90445950f8
The Design and Implementation of a Mobile RFID Tag Sorting Robot
[ { "docid": "382ed00313a1769a135c625d529b735e", "text": "Activity monitoring in home environments has become increasingly important and has the potential to support a broad array of applications including elder care, well-being management, and latchkey child safety. Traditional approaches involve wearable sensors and specialized hardware installations. This paper presents device-free location-oriented activity identification at home through the use of existing WiFi access points and WiFi devices (e.g., desktops, thermostats, refrigerators, smartTVs, laptops). Our low-cost system takes advantage of the ever more complex web of WiFi links between such devices and the increasingly fine-grained channel state information that can be extracted from such links. It examines channel features and can uniquely identify both in-place activities and walking movements across a home by comparing them against signal profiles. Signal profiles construction can be semi-supervised and the profiles can be adaptively updated to accommodate the movement of the mobile devices and day-to-day signal calibration. Our experimental evaluation in two apartments of different size demonstrates that our approach can achieve over 96% average true positive rate and less than 1% average false positive rate to distinguish a set of in-place and walking activities with only a single WiFi access point. Our prototype also shows that our system can work with wider signal band (802.11ac) with even higher accuracy.", "title": "" }, { "docid": "973cb430e42b76a041a0f1f3315d700b", "text": "A growing number of mobile computing applications are centered around the user's location. The notion of location is broad, ranging from physical coordinates (latitude/longitude) to logical labels (like Starbucks, McDonalds). While extensive research has been performed in physical localization, there have been few attempts in recognizing logical locations. This paper argues that the increasing number of sensors on mobile phones presents new opportunities for logical localization. We postulate that ambient sound, light, and color in a place convey a photo-acoustic signature that can be sensed by the phone's camera and microphone. In-built accelerometers in some phones may also be useful in inferring broad classes of user-motion, often dictated by the nature of the place. By combining these optical, acoustic, and motion attributes, it may be feasible to construct an identifiable fingerprint for logical localization. Hence, users in adjacent stores can be separated logically, even when their physical positions are extremely close. We propose SurroundSense, a mobile phone based system that explores logical localization via ambience fingerprinting. Evaluation results from 51 different stores show that SurroundSense can achieve an average accuracy of 87% when all sensing modalities are employed. We believe this is an encouraging result, opening new possibilities in indoor localization.", "title": "" }, { "docid": "abc223dce354cc69af7555a09868813d", "text": "RFID technology has been widely adopted in a variety of applications from logistics to access control. Many applications gain benefits from knowing the exact position of an RFID-tagged object. Existing localization algorithms in wireless network, however, can hardly be directly employed due to tag's limited capabilities in terms of energy and memory. 
For example, the RSS based methods are vulnerable to both distance and tag orientation, while AOA based methods put a strict constraint on the antennas' spacing that reader's directional antennas are too large to meet. In this paper, we propose BackPos, a fine-grained backscatter positioning technique using the COTS RFID products with detected phase. Our study shows that the phase is indeed a stable indicator highly related to tag's position and preserved over frequency or tag orientation, but challenged by its periodicity and tag's diversity. We attempt to infer the distance differences from phases detected by antennas under triangle constraint. Further, hyperbolic positioning using the distance differences is employed to shrink the tag's candidate positions until filtering out the real one. In combination with interrogation zone, we finally relax the triangle constraint and allow arbitrary deployment of antennas by sacrificing the feasible region. We implement a prototype of BackPos with COTS RFID products and evaluate this design in various scenarios. The results show that BackPos achieves the mean accuracy of 12.8cm with variance of 3.8cm.", "title": "" } ]
[ { "docid": "6f72b40f0971bde8d21ff8008ca8edbf", "text": "In this paper, we propose multi-stage and deformable deep convolutional neural networks for object detection. This new deep learning object detection diagram has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (defpooling) layer models the deformation of object parts with geometric constraint and penalty. With the proposed multistage training strategy, multiple classifiers are jointly optimized to process samples at different difficulty levels. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By changing the net structures, training strategies, adding and removing some key components in the detection pipeline, a set of models with large diversity are obtained, which significantly improves the effectiveness of model averaging. The proposed approach ranked #2 in ILSVRC 2014. It improves the mean averaged precision obtained by RCNN, which is the state-of-the-art of object detection, from 31% to 45%. Detailed component-wise analysis is also provided through extensive experimental evaluation.", "title": "" }, { "docid": "c00c6539b78ed195224063bcff16fb12", "text": "Information Retrieval (IR) systems assist users in finding information from the myriad of information resources available on the Web. A traditional characteristic of IR systems is that if different users submit the same query, the system would yield the same list of results, regardless of the user. Personalised Information Retrieval (PIR) systems take a step further to better satisfy the user's specific information needs by providing search results that are not only of relevance to the query but are also of particular relevance to the user who submitted the query. PIR has thereby attracted increasing research and commercial attention as information portals aim at achieving user loyalty by improving their performance in terms of effectiveness and user satisfaction. In order to provide a personalised service, a PIR system maintains information about the users and the history of their interactions with the system. This information is then used to adapt the users' queries or the results so that information that is more relevant to the users is retrieved and presented. This survey paper features a critical review of PIR systems, with a focus on personalised search. The survey provides an insight into the stages involved in building and evaluating PIR systems, namely: information gathering, information representation, personalisation execution, and system evaluation. Moreover, the survey provides an analysis of PIR systems with respect to the scope of personalisation addressed. The survey proposes a classification of PIR systems into three scopes: individualised systems, community-based systems, and aggregate-level systems. Based on the conducted survey, the paper concludes by highlighting challenges and future research directions in the field of PIR.", "title": "" }, { "docid": "482ff6c78f7b203125781f5947990845", "text": "TH1 and TH17 cells mediate neuroinflammation in experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis. Pathogenic TH cells in EAE must produce the pro-inflammatory cytokine granulocyte-macrophage colony stimulating factor (GM-CSF). TH cell pathogenicity in EAE is also regulated by cell-intrinsic production of the immunosuppressive cytokine interleukin 10 (IL-10). 
Here we demonstrate that mice deficient for the basic helix-loop-helix (bHLH) transcription factor Bhlhe40 (Bhlhe40(-/-)) are resistant to the induction of EAE. Bhlhe40 is required in vivo in a T cell-intrinsic manner, where it positively regulates the production of GM-CSF and negatively regulates the production of IL-10. In vitro, GM-CSF secretion is selectively abrogated in polarized Bhlhe40(-/-) TH1 and TH17 cells, and these cells show increased production of IL-10. Blockade of IL-10 receptor in Bhlhe40(-/-) mice renders them susceptible to EAE. These findings identify Bhlhe40 as a critical regulator of autoreactive T-cell pathogenicity.", "title": "" }, { "docid": "b3a2be2d02946449ef32546d097220f1", "text": "A half-rate bang-bang phase and frequency detector (BBPFD) is presented for continuous-rate clock and data recovery (CDR) circuits. The proposed half-rate BBPFD not only preserves the advantages of conventional BBPDs, but also has the infinite unilateral frequency detection range. To verify the proposed circuit, a continuous-rate CDR circuit with the proposed BBPFD has been fabricated in a 0.18um CMOS process. It can recover the NRZ data with the bit rate ranging from 622 Mbps to 3.125 Gbps. The measured bit-error rate is less than 10-12. The core area is 0.33 x 0.27 mm2 and the power consumption is 80 mW from a 1.8 V supply.", "title": "" }, { "docid": "15dbd6af7840bdfe54609873dd1a0ad9", "text": "As software systems become increasingly complex to build developers are turning more and more to integrating pre-built components from third party developers into their systems. This use of Commercial Off-The-Shelf (COTS) software components in system construction presents new challenges to system architects and designers. This paper is an experience report that describes issues raised when integrating COTS components, outlines strategies for integration, and presents some informal rules we have developed that ease the development and maintenance of such systems.", "title": "" }, { "docid": "4de4ab2be955c318ffbd58924af9271f", "text": "The amount of Short Message Service (SMS) spam is increasing. Various solutions to filter SMS spam on mobile phones have been proposed. Most of these use Text Classification techniques that consist of training, filtering, and updating processes. However, they require a computer or a large amount of SMS data in advance to filter SMS spam, especially for the training. This increases hardware maintenance and communication costs. Thus, we propose to filter SMS spamon independentmobile phones using Text Classification techniques. The training, filtering, and updating processes are performed on an independent mobile phone. The mobile phone has storage, memory and CPU limitations compared with a computer. As such, we apply a probabilistic Naïve Bayes classifier using word occurrences for screening because of its simplicity and fast performance. Our experiment on an Android mobile phone shows that it can filter SMS spamwith reasonable accuracy, minimum storage consumption, and acceptable processing time without support from a computer or using a large amount of SMS data for training. Thus, we conclude that filtering SMS spam can be performed on independent mobile phones. We can reduce the number of word attributes by almost 50% without reducing accuracy significantly, using our usability-based approach. 
Copyright © 2012 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "8746c488535baf8d715232811ca4c8ed", "text": "To optimize polysaccharide extraction from Spirulina sp., the effect of solid-to-liquid ratio, extraction temperature and time were investigated using Box-Behnken experimental design and response surface methodology. The results showed that extraction temperature and solid-to-liquid ratio had a significant impact on the yield of polysaccharides. A polysaccharides yield of around 8.3% dry weight was obtained under the following optimized conditions: solid-to-liquid ratio of 1:45, temperature of 90°C, and time of 120 min. The polysaccharide extracts contained rhamnose, which accounted for 53% of the total sugars, with a phenolic content of 45 mg GAE/g sample.", "title": "" }, { "docid": "4052ea81bf445a5c1d95e51dc32c5962", "text": "Most sentiment analysis work has been carried out on highly subjective text types where the target is clearly defined and unique across the text (movie or product reviews). However, when applying sentiment analysis to the news domain, it is necessary to clearly define the scope of the task, in a more specific manner than it has been done in the field so far. The main tasks we identified for news opinion mining are: definition of the target; separation of the good and bad news content from the good and bad sentiment expressed on the target; and analysis of clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. It is furthermore important to distinguish three different possible views on newspaper articles – author, reader and text. These have to be addressed differently at the time of analysing sentiment, especially the case of author intention and reader interpretation, where specific profiles must be defined if the proper sentiment is to be extracted.", "title": "" }, { "docid": "4ee6894fade929db82af9cb62fecc0f9", "text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.", "title": "" }, { "docid": "4cb540a7d4e95db595d7cc17b3616d00", "text": "The design tradeoffs of the class-D amplifier (CDA) for driving piezoelectric (PZ) speakers are presented, including efficiency, linearity, and electromagnetic interference. An implementation is proposed to achieve high efficiency in the CDA architecture for PZ speakers to extend battery life in mobile devices. A self-oscillating closed-loop architecture is used to obviate the need for a carrier signal generator to achieve low power consumption. 
The use of stacked-cascode CMOS transistors at the H-bridge output stage provides low-input capacitance to allow high-switching frequency to improve linearity with high efficiency. Moreover, the CDA monolithic implementation achieves 18 VPP output voltage swing in a low-voltage CMOS technology without requiring expensive high-voltage semiconductor devices. The prototype experimental results achieved a minimum THD + N of 0.025%, and a maximum efficiency of 96%. Compared to available CDA for PZ speakers, the proposed CDA achieved higher linearity, lower power consumption, and higher efficiency.", "title": "" }, { "docid": "91f3268092606d2bd1698096e32c824f", "text": "Classic pipeline models for task-oriented dialogue system require explicit modeling the dialogue states and hand-crafted action spaces to query a domain-specific knowledge base. Conversely, sequence-to-sequence models learn to map dialogue history to the response in current turn without explicit knowledge base querying. In this work, we propose a novel framework that leverages the advantages of classic pipeline and sequence-to-sequence models. Our framework models a dialogue state as a fixed-size distributed representation and use this representation to query a knowledge base via an attention mechanism. Experiment on Stanford Multi-turn Multi-domain Taskoriented Dialogue Dataset shows that our framework significantly outperforms other sequenceto-sequence based baseline models on both automatic and human evaluation. Title and Abstract in Chinese 面向任务型对话中基于对话状态表示的序列到序列学习 面向任务型对话中,传统流水线模型要求对对话状态进行显式建模。这需要人工定义对 领域相关的知识库进行检索的动作空间。相反地,序列到序列模型可以直接学习从对话 历史到当前轮回复的一个映射,但其没有显式地进行知识库的检索。在本文中,我们提 出了一个结合传统流水线与序列到序列二者优点的模型。我们的模型将对话历史建模为 一组固定大小的分布式表示。基于这组表示,我们利用注意力机制对知识库进行检索。 在斯坦福多轮多领域对话数据集上的实验证明,我们的模型在自动评价与人工评价上优 于其他基于序列到序列的模型。", "title": "" }, { "docid": "695c396f27ba31f15f7823511473925c", "text": "Design and experimental analysis of beam steering in microstrip patch antenna array using dumbbell shaped Defected Ground Structure (DGS) for S-band (5.2 GHz) application was carried out in this study. The Phase shifting in antenna has been achieved using different size and position of dumbbell shape DGS. DGS has characteristics of slow wave, wide stop band and compact size. The obtained radiation pattern has provided steerable main lobe and nulls at predefined direction. The radiation pattern for different size and position of dumbbell structure in microstrip patch antenna array was measured and comparative study has been carried out.", "title": "" }, { "docid": "a50a9f45b25f21ce4ef04f686d25e36f", "text": "Twitter is the largest and most popular micro-blogging website on Internet. Due to low publication barrier, anonymity and wide penetration, Twitter has become an easy target or platform for extremists to disseminate their ideologies and opinions by posting hate and extremism promoting tweets. Millions of tweets are posted on Twitter everyday and it is practically impossible for Twitter moderators or an intelligence and security analyst to manually identify such tweets, users and communities. However, automatic classification of tweets into predefined categories is a non-trivial problem problem due to short text of the tweet (the maximum length of a tweet can be 140 characters) and noisy content (incorrect grammar, spelling mistakes, presence of standard and non-standard abbreviations and slang). 
We frame the problem of hate and extremism promoting tweet detection as a one-class or unary-class categorization problem by learning a statistical model from a training set containing only the objects of one class . We propose several linguistic features such as presence of war, religious, negative emotions and offensive terms to discriminate hate and extremism promoting tweets from other tweets. We employ a single-class SVM and KNN algorithm for one-class classification task. We conduct a case-study on Jihad, perform a characterization study of the tweets and measure the precision and recall of the machine-learning based classifier. Experimental results on large and real-world dataset demonstrate that the proposed approach is effective with F-score of 0.60 and 0.83 for the KNN and SVM classifier respectively.", "title": "" }, { "docid": "200a0213fddd98661f8634fe00affb16", "text": "Ontology learning has been an important research area in the Semantic Web field in the last 20 years. Ontology learning systems generate domain models from data (typically text) using a combination of sophisticated methods. In this poster, we study the use of Google’s word2vec to emulate a simple ontology learning system, and compare the results to an existing “traditional” ontology learning system.", "title": "" }, { "docid": "cdaba8e8d86ca072607880eb5408e441", "text": "The bridged T-coil, often simply called the T-coil, is a circuit topology that extends the bandwidth by a greater factor than does inductive peaking. Many high-speed amplifiers, line drivers, and input/output (I/O) interfaces in today's wireline systems incorporate on-chip T-coils to deal with parasitic capacitances. In this article, we introduce and analyze the basic structure and study its applications.", "title": "" }, { "docid": "8f4c629147db41356763de733aea618b", "text": "The application of simulation software in the planning process is state-of-the-art at many railway infrastructure managers. On the one hand software tools are used to point out the demand for new infrastructure and on the other hand they are used to optimize traffic flow in railway networks by support of the time table related processes. This paper deals with the first application of the software tool called OPENTRACK for simulation of railway operation on an existing line in Croatia from Zagreb to Karlovac. Aim of the work was to find out if the actual version of OPENTRACK able to consider the Croatian signalling system. Therefore the capability arises to use it also for other investigations in railway operation.", "title": "" }, { "docid": "c4dbfff3966e2694727aa171e29fa4bd", "text": "The ability to recognize known places is an essential competence of any intelligent system that operates autonomously over longer periods of time. Approaches that rely on the visual appearance of distinct scenes have recently been developed and applied to large scale SLAM scenarios. FAB-Map is maybe the most successful of these systems. Our paper proposes BRIEF-Gist, a very simplistic appearance-based place recognition system based on the BRIEF descriptor. BRIEF-Gist is much more easy to implement and more efficient compared to recent approaches like FAB-Map. Despite its simplicity, we can show that it performs comparably well as a front-end for large scale SLAM. We benchmark our approach using two standard datasets and perform SLAM on the 66 km long urban St. 
Lucia dataset.", "title": "" }, { "docid": "6e653e8c6b0074d065b02af81ddcc627", "text": "The existing research on lone wolf terrorists and case experience are reviewed and interpreted through the lens of psychoanalytic theory. A number of characteristics of the lone wolf are enumerated: a personal grievance and moral outrage; the framing of an ideology; failure to affiliate with an extremist group; dependence on a virtual community found on the Internet; the thwarting of occupational goals; radicalization fueled by changes in thinking and emotion - including cognitive rigidity, clandestine excitement, contempt, and disgust - regardless of the particular ideology; the failure of sexual pair bonding and the sexualization of violence; the nexus of psychopathology and ideology; greater creativity and innovation than terrorist groups; and predatory violence sanctioned by moral (superego) authority. A concluding psychoanalytic formulation is offered.", "title": "" }, { "docid": "40f9ed887e310e386c040b4d743e4039", "text": "The design and performance of a miniaturized coplanar capacitive sensor is presented whose electrode arrays can also function as resistive microheaters for thermocapillary actuation of liquid films and droplets. Optimal compromise between large capacitive signal and high spatial resolution is obtained for electrode widths comparable to the liquid film thickness measured, in agreement with supporting numerical simulations which include mutual capacitance effects. An interdigitated, variable width design, allowing for wider central electrodes, increases the capacitive signal for liquid structures with non-uniform height profiles. The capacitive resolution and time response of the current design is approximately 0.03 pF and 10 ms, respectively, which makes possible a number of sensing functions for nanoliter droplets. These include detection of droplet position, size, composition or percentage water uptake for hygroscopic liquids. Its rapid response time allows measurements of the rate of mass loss in evaporating droplets.", "title": "" } ]
scidocsrr
1ef6c4bf5b741807fee8047feaba1d3a
Brain MRI super-resolution using deep 3D convolutional networks
[ { "docid": "3768b0373b9c2c38ad30987fbce92915", "text": "Image super-resolution (SR) aims to recover high-resolution images from their low-resolution counterparts for improving image analysis and visualization. Interpolation methods, widely used for this purpose, often result in images with blurred edges and blocking effects. More advanced methods such as total variation (TV) retain edge sharpness during image recovery. However, these methods only utilize information from local neighborhoods, neglecting useful information from remote voxels. In this paper, we propose a novel image SR method that integrates both local and global information for effective image recovery. This is achieved by, in addition to TV, low-rank regularization that enables utilization of information throughout the image. The optimization problem can be solved effectively via alternating direction method of multipliers (ADMM). Experiments on MR images of both adult and pediatric subjects demonstrate that the proposed method enhances the details in the recovered high-resolution images, and outperforms methods such as the nearest-neighbor interpolation, cubic interpolation, iterative back projection (IBP), non-local means (NLM), and TV-based up-sampling.", "title": "" } ]
[ { "docid": "e371f9b6ed1a8799e201d6d76ba6c5a1", "text": "A 13-year-old girl with virginal hypertrophy (bilateral extensive juvenile hypertrophy) of the breasts is presented. Her breasts began to grow rapidly after puberty and reached an enormous size within a year. On examination, both breasts were greatly enlarged. Routine blood chemistry and the endocrinological investigations were normal. The computerized tomography scan of the sella was unremarkable. A bilateral reduction mammaplasty was performed, and histological analysis of the breast tissue revealed the diagnosis of virginal hypertrophy. After four months her breasts began to grow again, and a second mammaplasty was performed. After this operation, tamoxifen citrate was given to prevent recurrence for four months, and during the follow-up period of 20 months, no recurrence was noted.", "title": "" }, { "docid": "09404689f2d1620ac85966c19a2671b5", "text": "Purpose. An upsurge of pure red cell aplasia (PRCA) cases associated with subcutaneous treatment with epoetin alpha has been reported. A formulation change introduced in 1998 is suspected to be the reason for the induction of antibodies that also neutralize the native protein. The aim of this study was to detect the mechanism by which the new formulation may induce these antibodies. Methods. Formulations of epoetin were subjected to gel permeation chromatography with UV detection, and the fractions were analyzed by an immunoassay for the presence of epoetin. Results. The chromatograms showed that Eprex®/Erypo® contained micelles of Tween 80. A minute amount of epoetin (0.008-0.033% of the total epoetin content) coeluted with the micelles, as evidenced by ELISA. When 0.03% (w/v) Tween 80, corresponding to the concentration in the formulation, was added to the elution medium, the percentage of epoetin eluting before the main peak was 0.68%. Conclusions. Eprex®/Erypo® contains micelle-associated epoetin, which may be a risk factor for the development of antibodies against epoetin.", "title": "" }, { "docid": "1b6ddffacc50ad0f7e07675cfe12c282", "text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. 
to estimate both simultaneously when neither is known.", "title": "" }, { "docid": "b31bae9e7c95e070318df8279cdd18d5", "text": "This article focuses on the ethical analysis of cyber warfare, the warfare characterised by the deployment of information and communication technologies. It addresses the vacuum of ethical principles surrounding this phenomenon by providing an ethical framework for the definition of such principles. The article is divided in three parts. The first one considers cyber warfare in relation to the so-called information revolution and provides a conceptual analysis of this kind of warfare. The second part focuses on the ethical problems posed by cyber warfare and describes the issues that arise when Just War Theory is endorsed to address them. The final part introduces Information Ethics as a suitable ethical framework for the analysis of cyber warfare, and argues that the vacuum of ethical principles for this kind of warfare is overcome when Just War Theory and Information Ethics are merged together.", "title": "" }, { "docid": "73b62ff6e2a9599d465f25e554ad0fb7", "text": "Rapid advancements in technology coupled with drastic reduction in cost of storage have resulted in tremendous increase in the volumes of stored data. As a consequence, analysts find it hard to cope with the rates of data arrival and the volume of data, despite the availability of many automated tools. In a digital investigation context where it is necessary to obtain information that led to a security breach and corroborate them is the contemporary challenge. Traditional techniques that rely on keyword based search fall short of interpreting data relationships and causality that is inherent to the artifacts, present across one or more sources of information. The problem of handling very large volumes of data, and discovering the associations among the data, emerges as an important contemporary challenge. The work reported in this paper is based on the use of metadata associations and eliciting the inherent relationships. We study the metadata associations methodology and introduce the algorithms to group artifacts. We establish that grouping artifacts based on metadata can provide a volume reduction of at least $1/(2M)$, even on a single source, where M is the largest number of metadata associated with an artifact in that source. The value of M is independent of inherently available metadata on any given source. As one understands the underlying data better, one can further refine the value of M iteratively thereby enhancing the volume reduction capabilities. We also establish that such reduction in volume is independent of the distribution of metadata associations across artifacts in any given source. We systematically develop the algorithms necessary to group artifacts on an arbitrary collection of sources and study the complexity.", "title": "" }, { "docid": "5f77218388ee927565a993a8e8c48ef3", "text": "The paper presents an idea of Lexical Platform proposed as a means for a lightweight integration of various lexical resources into one complex (from the perspective of non-technical users). All LRs will be represented as software web components implementing a minimal set of predefined programming interfaces providing functionality for querying and generating simple common presentation format. A common data format for the resources will not be required. 
Users will be able to search, browse and navigate via resources on the basis of anchor elements of a limited set of types. Lexical resources linked to the platform via components will preserve their identity.", "title": "" }, { "docid": "ed4178ec9be6f4f8e87a50f0bf1b9a41", "text": "PURPOSE\nTo report a case of central retinal artery occlusion (CRAO) in a patient with biopsy-verified Wegener's granulomatosis (WG) with positive C-ANCA.\n\n\nMETHODS\nA 55-year-old woman presented with a 3-day history of acute painless bilateral loss of vision; she also complained of fever and weight loss. Examination showed a CRAO in the left eye and angiographically documented choroidal ischemia in both eyes.\n\n\nRESULTS\nThe possibility of systemic vasculitis was not kept in mind until further studies were carried out; methylprednisolone pulse therapy was then started. Renal biopsy disclosed focal and segmental necrotizing vasculitis of the medium-sized arteries, supporting the diagnosis of WG, and cyclophosphamide pulse therapy was administered with gradual improvement, but there was no visual recovery.\n\n\nCONCLUSION\nCRAO as presenting manifestation of WG, in the context of retinal vasculitis, is very uncommon, but we should be aware of WG in the etiology of CRAO. This report shows the difficulty of diagnosing Wegener's granulomatosis; it requires a high index of suspicion, and we should obtain an accurate medical history and repeat serological and histopathological examinations. It emphasizes that inflammation of arteries leads to irreversible retinal infarction, and visual loss may occur.", "title": "" }, { "docid": "494030ce6b5294bf3ebdf2f89788230b", "text": "Natural language understanding (NLU) is a core component of a spoken dialogue system. Recently recurrent neural networks (RNN) obtained strong results on NLU due to their superior ability of preserving sequential information over time. Traditionally, the NLU module tags semantic slots for utterances considering their flat structures, as the underlying RNN structure is a linear chain. However, natural language exhibits linguistic properties that provide rich, structured information for better understanding. This paper introduces a novel model, knowledge-guided structural attention networks (K-SAN), a generalization of RNN to additionally incorporate non-flat network topologies guided by prior knowledge. There are two characteristics: 1) important substructures can be captured from small training data, allowing the model to generalize to previously unseen test data; 2) the model automatically figures out the salient substructures that are essential to predict the semantic tags of the given sentences, so that the understanding performance can be improved. The experiments on the benchmark Air Travel Information System (ATIS) data show that the proposed K-SAN architecture can effectively extract salient knowledge from substructures with an attention mechanism, and outperform the performance of the state-of-the-art neural network based frameworks.", "title": "" }, { "docid": "6f89c0f3f6590d32bd5e71ee876a65e2", "text": "Plant growth-promoting rhizobacteria (PGPR) are naturally occurring soil bacteria that aggressively colonize plant roots and benefit plants by providing growth promotion. Inoculation of crop plants with certain strains of PGPR at an early stage of development improves biomass production through direct effects on root and shoots growth. 
Inoculation of ornamentals, forest trees, vegetables, and agricultural crops with PGPR may result in multiple effects on early-season plant growth, as seen in the enhancement of seedling germination, stand health, plant vigor, plant height, shoot weight, nutrient content of shoot tissues, early bloom, chlorophyll content, and increased nodulation in legumes. PGPR are reported to influence the growth, yield, and nutrient uptake by an array of mechanisms. They help in increasing nitrogen fixation in legumes, help in promoting free-living nitrogen-fixing bacteria, increase supply of other nutrients, such as phosphorus, sulphur, iron and copper, produce plant hormones, enhance other beneficial bacteria or fungi, control fungal and bacterial diseases and help in controlling insect pests. There has been much research interest in PGPR and there is now an increasing number of PGPR being commercialized for various crops. Several reviews have discussed specific aspects of growth promotion by PGPR. In this review, we have discussed various bacteria which act as PGPR, mechanisms and the desirable properties exhibited by them.", "title": "" }, { "docid": "e165cac5eb7ad77b43670e4558011210", "text": "PURPOSE\nTo retrospectively review our experience in infants with glanular hypospadias or hooded prepuce without meatal anomaly, who underwent circumcision with the plastibell device. Although circumcision with the plastibell device is well described, there are no reported experiences pertaining to hooded prepuce or glanular hypospadias that have been operated on by this technique.\n\n\nMATERIALS AND METHODS\nBetween September 2002 and September 2008, 21 children with hooded prepuce (age 1 to 11 months, mean 4.6 months) were referred for hypospadias repair. Four of them did not have meatal anomaly. Their parents accepted this small anomaly and requested circumcision without glanuloplasty. In all cases, the circumcision was corrected by a plastibell device.\n\n\nRESULTS\nNo complications occurred in the circumcised patients, except delayed falling of bell in one case that was removed by a surgeon, after the tenth day.\n\n\nCONCLUSION\nCircumcision with the plastibell device is a suitable method for excision of hooded prepuce. It can also be used successfully in infants, who have miniglanular hypospadias, and whose parents accepted this small anomaly.", "title": "" }, { "docid": "db0d0348ae9cd4fa225629d154ed9501", "text": "In this paper, we present a systematic study for the detection of malicious applications (or apps) on popular Android Markets. To this end, we first propose a permissionbased behavioral footprinting scheme to detect new samples of known Android malware families. Then we apply a heuristics-based filtering scheme to identify certain inherent behaviors of unknown malicious families. We implemented both schemes in a system called DroidRanger. The experiments with 204, 040 apps collected from five different Android Markets in May-June 2011 reveal 211 malicious ones: 32 from the official Android Market (0.02% infection rate) and 179 from alternative marketplaces (infection rates ranging from 0.20% to 0.47%). Among those malicious apps, our system also uncovered two zero-day malware (in 40 apps): one from the official Android Market and the other from alternative marketplaces. The results show that current marketplaces are functional and relatively healthy. 
However, there is also a clear need for a rigorous policing process, especially for non-regulated alternative marketplaces.", "title": "" }, { "docid": "eaa37c0420dbc804eaf480d1167ad201", "text": "This paper focuses on the problem of object detection when the annotation at training time is restricted to presence or absence of object instances at image level. We present a method based on features extracted from a Convolutional Neural Network and latent SVM that can represent and exploit the presence of multiple object instances in an image. Moreover, the detection of the object instances in the image is improved by incorporating in the learning procedure additional constraints that represent domain-specific knowledge such as symmetry and mutual exclusion. We show that the proposed method outperforms the state-of-the-art in weakly-supervised object detection and object classification on the Pascal VOC 2007 dataset.", "title": "" }, { "docid": "b66609e66cc9c3844974b3246b8f737e", "text": "Inspired by the evolutionary conjecture that sexually selected traits function as indicators of pathogen resistance in animals and humans, we examined the notion that human facial attractiveness provides evidence of health. Using photos of 164 males and 169 females in late adolescence and health data on these individuals in adolescence, middle adulthood, and later adulthood, we found that adolescent facial attractiveness was unrelated to adolescent health for either males or females, and was not predictive of health at the later times. We also asked raters to guess the health of each stimulus person from his or her photo. Relatively attractive stimulus persons were mistakenly rated as healthier than their peers. The correlation between perceived health and medically assessed health increased when attractiveness was statistically controlled, which implies that attractiveness suppressed the accurate recognition of health. These findings may have important implications for evolutionary models. When social psychologists began in earnest to study physical attractiveness, they were startled by the powerful effect of facial attractiveness on choice of romantic partner (Walster, Aronson, Abrahams, & Rottmann, 1966) and other aspects of human interaction (Berscheid & Walster, 1974; Hatfield & Sprecher, 1986). More recent findings have been startling again in revealing that infants' preferences for viewing images of faces can be predicted from adults' attractiveness ratings of the faces. The assumption that perceptions of attractiveness are culturally determined has thus given ground to the suggestion that they are in substantial part biologically based (Langlois et al., 1987). A biological basis for perception of facial attractiveness is aptly viewed as an evolutionary basis. It happens that evolutionists, under the rubric of sexual selection theory, have recently devoted increasing attention to the origin and function of sexually attractive traits in animal species (Andersson, 1994; Hamilton & Zuk, 1982). Sexual selection as a province of evolutionary theory actually goes back to Darwin (1859, 1871), who noted with chagrin that a number of animals sport an appearance that seems to hinder their survival chances. Although the females of numerous birds of prey, for example, are well camouflaged in drab plumage, their mates wear bright plumage that must be conspicuous to predators. 
Darwin divined that the evolutionary force that \" bred \" the males' bright plumage was the females' preference for such showiness in a mate. Whereas Darwin saw aesthetic preferences as fundamental and did not seek to give them adaptive functions, other scholars, beginning …", "title": "" }, { "docid": "22bed4d5c38a096ae24a76dce7fc5136", "text": "BACKGROUND\nMedical Image segmentation is an important image processing step. Comparing images to evaluate the quality of segmentation is an essential part of measuring progress in this research area. Some of the challenges in evaluating medical segmentation are: metric selection, the use in the literature of multiple definitions for certain metrics, inefficiency of the metric calculation implementations leading to difficulties with large volumes, and lack of support for fuzzy segmentation by existing metrics.\n\n\nRESULT\nFirst we present an overview of 20 evaluation metrics selected based on a comprehensive literature review. For fuzzy segmentation, which shows the level of membership of each voxel to multiple classes, fuzzy definitions of all metrics are provided. We present a discussion about metric properties to provide a guide for selecting evaluation metrics. Finally, we propose an efficient evaluation tool implementing the 20 selected metrics. The tool is optimized to perform efficiently in terms of speed and required memory, also if the image size is extremely large as in the case of whole body MRI or CT volume segmentation. An implementation of this tool is available as an open source project.\n\n\nCONCLUSION\nWe propose an efficient evaluation tool for 3D medical image segmentation using 20 evaluation metrics and provide guidelines for selecting a subset of these metrics that is suitable for the data and the segmentation task.", "title": "" }, { "docid": "7768c834a837d8f02ce91c4949f87d59", "text": "Gamified systems benefit from various gamification-elements to motivate users and encourage them to persist in their quests towards a goal. This paper proposes a categorization of gamification-elements and learners' motivation type to enrich a learning management system with the advantages of personalization and gamification. This categorization uses the learners' motivation type to assign gamification-elements in learning environments. To find out the probable relations between gamification-elements and learners' motivation type, a field-research is done to measure learners' motivation along with their interests in gamification-elements. Based on the results of this survey, all the gamification-elements are categorized according to related motivation types, which form our proposed categorization. To investigate the effects of this personalization approach, a gamified learning management system is prepared. Our implemented system is evaluated in Technical English course at University of Tehran. Our experimental results on the average participation rate show the effectiveness of the personalization approach on the learners' motivation. 
Based on the paper findings, we suggest an integrated categorization of gamification-elements and learners' motivation type, which can further enhance the learners' motivation through personalization.", "title": "" }, { "docid": "610476babafbf2785ace600ed409638c", "text": "In the utility grid interconnection of photovoltaic (PV) energy sources, inverters determine the overall system performance, which result in the demand to route the grid connected transformerless PV inverters (GCTIs) for residential and commercial applications, especially due to their high efficiency, light weight, and low cost benefits. In spite of these benefits of GCTIs, leakage currents due to distributed PV module parasitic capacitances are a major issue in the interconnection, as they are undesired because of safety, reliability, protective coordination, electromagnetic compatibility, and PV module lifetime issues. This paper classifies the kW and above range power rating GCTI topologies based on their leakage current attributes and investigates and/illustrates their leakage current characteristics by making use of detailed microscopic waveforms of a representative topology of each class. The cause and quantity of leakage current for each class are identified, not only providing a good understanding, but also aiding the performance comparison and inverter design. With the leakage current characteristic investigation, the study places most topologies under small number of classes with similar leakage current attributes facilitating understanding, evaluating, and the design of GCTIs. Establishing a clear relation between the topology type and leakage current characteristic, the topology families are extended with new members, providing the design engineers a variety of GCTI topology configurations with different characteristics.", "title": "" }, { "docid": "bb483dd62b4b104b0314914557a0ae4b", "text": "At a recent Reddit AMA (Ask Me Anything), Emmett Shear, CEO of Twitch.tv, the leading live video platform for streaming games, claimed that in a decade e-sports will be bigger than athletic sports [7]. While his statement was both hyperbolic and speculative, the particulars were not: e-sports tournaments have spectator numbers in the millions, recent franchise games have logged over a billion hours of gameplay, while experts and amateur e-sports enthusiasts alike regularly broadcast and share their competitive play online [1, 4, 6, 8, 9, 10]. The growing passion for mainstream e-sports is apparent, though there are also interesting, less visible happenings on the periphery of the e-sports media industry - notably, the acts of life and death that happen off the polished main stage. Smaller tournaments have been cut to make way for major e-sports franchises [11]; games with a strong culture of dark play have attempted to encourage esport iterations, encountering conflict where bribery and espionage is interwoven with traditional sporting structures [2]; and third party organizations have created new ways to watch, participate, celebrate, but also profit from one's love of games [3]. In these actions, we find some of the ways in which competitive games and gaming lifestyles are extended, but also often dissolved from the main stages of e-sports. At a broader level, these events allow us to witness the growth and sedimentation of this new socio-technical form. 
Simultaneously, we observe its erosion as the practices and form of e-sports are subject to the compromises demanded by processes of cultural and audience reception, and attempts to maximise cultural appeal and commercial success. It is in the interplay between this ceaseless growth and erosion that the significance of e-sport can be found. E-sport represents a rare opportunity to observe the historical emergence of interactive gaming in a sporting 'skin', as well as new forms of sports-like competition realised through interactive gaming platforms. The position of this panel moves beyond questions of (sports) disciplinary rivalry to consider how e-sports extend our understanding of sports and competitive games more broadly. Drawing on qualitative studies, theoretical considerations, and practical work, this panel explores the tensions, but also the new \"sporting\" possibilities which emerge in moments of transition -- between the life and death of a tournament, the extension of spectatorship opportunities, the construction of a competitive gaming scene, and the question of how to best conceptualise e-sport at the intersection of gaming and sport.", "title": "" }, { "docid": "a39c0db041f31370135462af467426ed", "text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.", "title": "" }, { "docid": "129efeb93aad31aca7be77ef499398e2", "text": "Using a Neonatal Intensive Care Unit (NICU) case study, this work investigates the current CRoss Industry Standard Process for Data Mining (CRISP-DM) approach for modeling Intelligent Data Analysis (IDA)-based systems that perform temporal data mining (TDM). The case study highlights the need for an extended CRISP-DM approach when modeling clinical systems applying Data Mining (DM) and Temporal Abstraction (TA). As the number of such integrated TA/DM systems continues to grow, this limitation becomes significant and motivated our proposal of an extended CRISP-DM methodology to support TDM, known as CRISP-TDM. This approach supports clinical investigations on multi-dimensional time series data. This research paper has three key objectives: 1) Present a summary of the extended CRISP-TDM methodology; 2) Demonstrate the applicability of the proposed model to the NICU data, focusing on the challenges associated with multi-dimensional time series data; and 3) Describe the proposed IDA architecture for applying integrated TDM.", "title": "" }, { "docid": "4da68af0db0b1e16f3597c8820b2390d", "text": "We study the task of verifiable delegation of computation on encrypted data. We improve previous definitions in order to tolerate adversaries that learn whether or not clients accept the result of a delegated computation. 
In this strong model, we construct a scheme for arbitrary computations and highly efficient schemes for delegation of various classes of functions, such as linear combinations, high-degree univariate polynomials, and multivariate quadratic polynomials. Notably, the latter class includes many useful statistics. Using our solution, a client can store a large encrypted dataset on a server, query statistics over this data, and receive encrypted results that can be efficiently verified and decrypted.\n As a key contribution for the efficiency of our schemes, we develop a novel homomorphic hashing technique that allows us to efficiently authenticate computations, at the same cost as if the data were in the clear, avoiding a $10^4$ overhead which would occur with a naive approach. We support our theoretical constructions with extensive implementation tests that show the practical feasibility of our schemes.", "title": "" } ]
scidocsrr
9074859df353d196f00e062f6601a423
Human-robot interaction in rescue robotics
[ { "docid": "5d7f5a6981824a257fe3868375f1d18f", "text": "This paper describes a mobile robotic assistant, developed to assist elderly individuals with mild cognitive and physical impairments, as well as support nurses in their daily activities. We present three software modules relevant to ensure successful human–robot interaction: an automated reminder system; a people tracking and detection system; and finally a high-level robot controller that performs planning under uncertainty by incorporating knowledge from low-level modules, and selecting appropriate courses of actions. During the course of experiments conducted in an assisted living facility, the robot successfully demonstrated that it could autonomously provide reminders and guidance for elderly residents. a a Purchase Export", "title": "" } ]
[ { "docid": "82e866d42fed897b66e49c92209ad805", "text": "A fingerprinting design extracts discriminating features, called fingerprints. The extracted features are unique and specific to each image/video. The visual hash is usually a global fingerprinting technique with crypto-system constraints. In this paper, we propose an innovative video content identification process which combines a visual hash function and a local fingerprinting. Thanks to a visual hash function, we observe the video content variation and we detect key frames. A local image fingerprint technique characterizes the detected key frames. The set of local fingerprints for the whole video summarizes the video or fragments of the video. The video fingerprinting algorithm identifies an unknown video or a fragment of video within a video fingerprint database. It compares the local fingerprints of the candidate video with all local fingerprints of a database even if strong distortions are applied to an original content.", "title": "" }, { "docid": "81a1c561f60f281187ec6ae4c9f42129", "text": "In this paper, we describe a novel spectral conversion method for voice conversion (VC). A Gaussian mixture model (GMM) of the joint probability density of source and target features is employed for performing spectral conversion between speakers. The conventional method converts spectral parameters frame by frame based on the minimum mean square error. Although it is reasonably effective, the deterioration of speech quality is caused by some problems: 1) appropriate spectral movements are not always caused by the frame-based conversion process, and 2) the converted spectra are excessively smoothed by statistical modeling. In order to address those problems, we propose a conversion method based on the maximum-likelihood estimation of a spectral parameter trajectory. Not only static but also dynamic feature statistics are used for realizing the appropriate converted spectrum sequence. Moreover, the oversmoothing effect is alleviated by considering a global variance feature of the converted spectra. Experimental results indicate that the performance of VC can be dramatically improved by the proposed method in view of both speech quality and conversion accuracy for speaker individuality.", "title": "" }, { "docid": "4a9174770b8ca962e8135fbda0e41425", "text": "is published by Princeton University Press and copyrighted, © 2006, by Princeton University Press. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher, except for reading and browsing via the World Wide Web. Users are not permitted to mount this file on any network servers.", "title": "" }, { "docid": "cf1c04b4d0c61632d7a3969668d5e751", "text": "A 3 dB power divider/combiner in substrate integrated waveguide (SIW) technology is presented. The divider consists of an E-plane SIW bifurcation with an embedded thick film resistor. The transition divides a full-height SIW into two SIWs of half the height. The resistor provides isolation between these two. The divider is fabricated in a multilayer process using high frequency substrates. For the resistor carbon paste is printed on the middle layer of the stack-up. Simulation and measurement results are presented. 
The measured divider exhibits an isolation of better than 22 dB within a bandwidth of more than 3GHz at 20 GHz.", "title": "" }, { "docid": "812c41737bb2a311d45c5566f773a282", "text": "Acceleration, sprint and agility performance are crucial in sports like soccer. There are few studies regarding the effect of training on youth soccer players in agility performance and in sprint distances shorter than 30 meter. Therefore, the aim of the recent study was to examine the effect of a high-intensity sprint and plyometric training program on 13-year-old male soccer players. A training group of 14 adolescent male soccer players, mean age (±SD) 13.5 years (±0.24) followed an eight week intervention program for one hour per week, and a group of 12 adolescent male soccer players of corresponding age, mean age 13.5 years (±0.23) served as control a group. Preand post-tests assessed 10-m linear sprint, 20-m linear sprint and agility performance. Results showed a significant improvement in agility performance, pre 8.23 s (±0.34) to post 7.69 s (± 0.34) (p<0.01), and a significant improvement in 0-20m linear sprint, pre 3.54s (±0.17) to post 3.42s (±0.18) (p<0.05). In 0-10m sprint the participants also showed an improvement, pre 2.02s (±0.11) to post 1.96s (± 0.11), however this was not significant. The correlation between 10-m sprint and agility was r = 0.53 (p<0.01), and between 20-m linear sprint and agility performance, r = 0.67 (p<0.01). The major finding in the study is the significant improvement in agility performance and in 0-20 m linear sprint in the intervention group. These findings suggest that organizing the training sessions with short-burst high-intensity sprint and plyometric exercises interspersed with adequate recovery time, may result in improvements in both agility and in linear sprint performance in adolescent male soccer players. Another finding is the correlation between linear sprint and agility performance, indicating a difference when compared to adults. 4 | Mathisen: EFFECT OF HIGH-SPEED...", "title": "" }, { "docid": "5af470de0bc3ea61b1812374a09793b8", "text": "In this paper, we propose a fully convolutional network for iterative non-blind deconvolution. We decompose the non-blind deconvolution problem into image denoising and image deconvolution. We train a FCNN to remove noise in the gradient domain and use the learned gradients to guide the image deconvolution step. In contrast to the existing deep neural network based methods, we iteratively deconvolve the blurred images in a multi-stage framework. The proposed method is able to learn an adaptive image prior, which keeps both local (details) and global (structures) information. Both quantitative and qualitative evaluations on the benchmark datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of quality and speed.", "title": "" }, { "docid": "799047d8c129b5c67bc838a53e2ac7e7", "text": "This paper proposes a pre-regulator boost converter applied to a dc/dc converter in order to provide power factor correction. The combination of both stages results in a symmetrical switched power supply, which is composed of two symmetrical stages that operate at 100 kHz, as the individual output voltages are equal to +200 V/sub dc/ and -200 V/sub dc/, the total output voltage is 400 Vdc and the total output power is 500 W. 
The power factor correction IC UC3854 is employed in the control strategy of the boost stage.", "title": "" }, { "docid": "afd0656733192f479ac3989812647227", "text": "In this paper we present a novel method for automatic traffic accident detection, based on Smoothed Particles Hydrodynamics (SPH). In our method, a motion flow field is obtained from the video through dense optical flow extraction. Then a thermal diffusion process (TDP) is exploited to turn the motion flow field into a coherent motion field. Approximating the moving particles to individuals, their interaction forces, represented as endothermic reactions, are computed using the enthalpy measure, thus obtaining the potential particles of interest. Furthermore, we exploit SPH that accumulates the contribution of each particle in a weighted form, based on a kernel function. The experimental evaluation is conducted on a set of video sequences collected from Youtube, and the obtained results are compared against a state of the art technique.", "title": "" }, { "docid": "ccf6084095c4c4fc59483f680e40afee", "text": "This brief presents an identification experiment performed on the coupled dynamics of the edgewise bending vibrations of the rotor blades and the in-plane motion of the drivetrain of three-bladed wind turbines. These dynamics vary with rotor speed, and are subject to periodic wind flow disturbances. This brief demonstrates that this time-varying behavior can be captured in a linear parameter-varying (LPV) model with the rotor speed as the scheduling signal, and with additional sinusoidal inputs that are used as basis functions for the periodic wind flow disturbances. By including these inputs, the predictor-based LPV subspace identification approach (LPV PBSIDopt) was tailored for wind turbine applications. Using this tailor-made approach, the LPV model is identified from data measured with the three-bladed Controls Advanced Research Turbine (CART3) at the National Renewable Energy Laboratory's National Wind Technology Center.", "title": "" }, { "docid": "bf78bfc617dfe5a152ad018dacbd5488", "text": "Identifying and fixing defects is a crucial and expensive part of the software lifecycle. Measuring the quality of bug-fixing patches is a difficult task that affects both functional correctness and the future maintainability of the code base. Recent research interest in automatic patch generation makes a systematic understanding of patch maintainability and understandability even more critical. \n We present a human study involving over 150 participants, 32 real-world defects, and 40 distinct patches. In the study, humans perform tasks that demonstrate their understanding of the control flow, state, and maintainability aspects of code patches. As a baseline we use both human-written patches that were later reverted and also patches that have stood the test of time to ground our results. To address any potential lack of readability with machine-generated patches, we propose a system wherein such patches are augmented with synthesized, human-readable documentation that summarizes their effects and context. Our results show that machine-generated patches are slightly less maintainable than human-written ones, but that trend reverses when machine patches are augmented with our synthesized documentation. 
Finally, we examine the relationship between code features (such as the ratio of variable uses to assignments) with participants' abilities to complete the study tasks and thus explain a portion of the broad concept of patch quality.", "title": "" }, { "docid": "085f2b04f6f7c6d9a140d3ef027cbeca", "text": "E-Government implementation and adoption is influenced by several factors having either an enhancing or an aggravating effect on e-government implementation and use. This paper aims at shedding light on obstacles hindering mainly e-government implementation from two perspectives: the supply- and the demand-side of e-government services. The contribution to research is seen in summarized insights into what obstacles in e-government were identified in prior research and the suggestion of a classification of obstacles into the two categories of formal and informal obstacles. Literature was reviewed following a conceptual model encompassing a merger and extension of existing approaches. A process of identifying obstacles and improving services in the form of a loop is discussed before possible future research lines will be pointed to.", "title": "" }, { "docid": "5d673f5297919e6307dc2861d10ddfe6", "text": "Given the increased testing of school-aged children in the United States there is a need for a current and valid scale to measure the effects of test anxiety in children. The domain of children’s test anxiety was theorized to be comprised of three dimensions: thoughts, autonomic reactions, and off-task behaviors. Four stages are described in the evolution of the Children’s Test Anxiety Scale (CTAS): planning, construction, quantitative evaluation, and validation. A 50-item scale was administered to a development sample (N /230) of children in grades 3 /6 to obtain item analysis and reliability estimates which resulted in a refined 30-item scale. The reduced scale was administered to a validation sample (N /261) to obtain construct validity evidence. A three-factor structure fit the data reasonably well. Recommendations for future research with the scale are described.", "title": "" }, { "docid": "e83873daee4f8dae40c210987d9158e8", "text": "Domain ontologies are important information sources for knowledge-based systems. Yet, building domain ontologies from scratch is known to be a very labor-intensive process. In this study, we present our semi-automatic approach to building an ontology for the domain of wind energy which is an important type of renewable energy with a growing share in electricity generation all over the world. Related Wikipedia articles are first processed in an automated manner to determine the basic concepts of the domain together with their properties and next the concepts, properties, and relationships are organized to arrive at the ultimate ontology. We also provide pointers to other engineering ontologies which could be utilized together with the proposed wind energy ontology in addition to its prospective application areas. 
The current study is significant as, to the best of our knowledge, it proposes the first considerably wide-coverage ontology for the wind energy domain and the ontology is built through a semi-automatic process which makes use of the related Web resources, thereby reducing the overall cost of the ontology building process.", "title": "" }, { "docid": "4a6c7b68ea23f910f0edc35f4542e5cb", "text": "Microgrids have been proposed in order to handle the impacts of Distributed Generators (DGs) and make conventional grids suitable for large scale deployments of distributed generation. However, the introduction of microgrids brings some challenges. Protection of a microgrid and its entities is one of them. Due to the existence of generators at all levels of the distribution system and two distinct operating modes, i.e. Grid Connected and Islanded modes, the fault currents in a system vary substantially. Consequently, the traditional fixed current relay protection schemes need to be improved. This paper presents a conceptual design of a microgrid protection system which utilizes extensive communication to monitor the microgrid and update relay fault currents according to the variations in the system. The proposed system is designed so that it can respond to dynamic changes in the system such as connection/disconnection of DGs.", "title": "" }, { "docid": "0107d7777a01050a75fbe06bde3a397b", "text": "To review our current knowledge of the pathologic bone metabolism in otosclerosis and to discuss the possibilities of non-surgical, pharmacological intervention. Otosclerosis has been suspected to be associated with defective measles virus infection, local inflammation and consecutive bone deterioration in the human otic capsule. In the early stages of otosclerosis, different pharmacological agents may delay the progression or prevent further deterioration of the disease and consecutive hearing loss. Although effective anti-osteoporotic drugs have become available, the use of sodium fluoride and bisphosphonates in otosclerosis has not yet been successful. Bioflavonoids may relieve tinnitus due to otosclerosis, but there is no data available on long-term application and effects on sensorineural hearing loss. In the initial inflammatory phase, corticosteroids or non-steroidal anti-inflammatory drugs may be effective; however, extended systemic application may lead to serious side effects. Vitamin D administration may have effects on the pathological bone loss, as well as on inflammation. No information has been reported on the use of immunosuppressive drugs. Anti-cytokine targeted biological therapy, however, may be feasible. Indeed, one study on the local administration of infliximab has been reported. Potential targets of future therapy may include osteoprotegerin, RANK ligand, cathepsins and also the Wnt-β-catenin pathway. Finally, anti-measles vaccination may delay the progression of the disease and potentially decrease the number of new cases. In conclusion, stapes surgery remains to be widely accepted treatment of conductive hearing loss due to otosclerosis. Due to lack of solid evidence, the place of pharmacological treatment targeting inflammation and bone metabolism needs to be determined by future studies.", "title": "" }, { "docid": "c553ea1a03550bdc684dbacbb9bef385", "text": "NeuCoin is a decentralized peer-to-peer cryptocurrency derived from Sunny King’s Peercoin, which itself was derived from Satoshi Nakamoto’s Bitcoin. 
As with Peercoin, proof-of-stake replaces proof-of-work as NeuCoin’s security model, effectively replacing the operating costs of Bitcoin miners (electricity, computers) with the capital costs of holding the currency. Proof-of-stake also avoids proof-of-work’s inherent tendency towards centralization resulting from competition for coinbase rewards among miners based on lowest cost electricity and hash power. NeuCoin increases security relative to Peercoin and other existing proof-of-stake currencies in numerous ways, including: (1) incentivizing nodes to continuously stake coins over time through substantially higher mining rewards and lower minimum stake age; (2) abandoning the use of coin age in the mining formula; (3) causing the stake modifier parameter to change over time for each stake; and (4) utilizing a client that punishes nodes that attempt to mine on multiple branches with duplicate stakes. This paper demonstrates how NeuCoin’s proof-of-stake implementation addresses all commonly raised “nothing at stake” objections to generic proof-of-stake systems. It also reviews many of the flaws of proof-of-work designs to highlight the potential for an alternate cryptocurrency that solves these flaws.", "title": "" }, { "docid": "c238e600d072b7239934978b9f37a076", "text": "ifferentiation of benign and malignant (melanoma) of the pigmented skin lesions is difficult even for the dermatologists thus in this paper a new analysis of the dermatoscopic images have been proposed. Segmentation, feature extraction and classification are the major steps of images analysis. In Segmentation step we use an improved FFCM based segmentation method (our previous work) to achieve to binary segmented image. In feature extraction step, the shape features are extracted from the binary segmented image. After normalizing of the features, in classification step, the feature vectors are classified into two groups (benign and malignant) by SVM classifier. The classification result for the accuracy is 71.39%, specificity is 85.95%, and it has the satisfactory results in sensitivity metrics.", "title": "" }, { "docid": "6a2c7d43cde643f295ace71f5681285f", "text": "Quantum mechanics and information theory are among the most important scientific discoveries of the last century. Although these two areas initially developed separately, it has emerged that they are in fact intimately related. In this review the author shows how quantum information theory extends traditional information theory by exploring the limits imposed by quantum, rather than classical, mechanics on information storage and transmission. The derivation of many key results differentiates this review from the usual presentation in that they are shown to follow logically from one crucial property of relative entropy. Within the review, optimal bounds on the enhanced speed that quantum computers can achieve over their classical counterparts are outlined using information-theoretic arguments. In addition, important implications of quantum information theory for thermodynamics and quantum measurement are intermittently discussed. A number of simple examples and derivations, including quantum superdense coding, quantum teleportation, and Deutsch’s and Grover’s algorithms, are also included.", "title": "" }, { "docid": "8e5c2bfb2ef611c94a02c2a214c4a968", "text": "This paper defines and explores a somewhat different type of genetic algorithm (GA) a messy genetic algorithm (mGA). 
Messy GAs process variable-length strings that may be either under- or overspecified with respect to the problem being solved. As nature has formed its genotypes by progressing from simple to more complex life forms, messy GAs solve problems by combining relatively short, well-tested building blocks to form longer, more complex strings that increasingly cover all features of a problem. This approach stands in contrast to the usual fixed-length, fixed-coding genetic algorithm, where the existence of the requisite tight linkage is taken for granted or ignored altogether. To compare the two approaches, a 30-bit, order-three-deceptive problem is searched using a simple GA and a messy GA. Using a random but fixed ordering of the bits, the simple GA makes errors at roughly three-quarters of its positions; under a worst-case ordering, the simple GA errs at all positions. In contrast to the simple GA results, the messy GA repeatedly solves the same problem to optimality. Prior to this time, no GA had ever solved a provably difficult problem to optimality without prior knowledge of good string arrangements. The mGA presented herein repeatedly achieves globally optimal results without such knowledge, and it does so at the very first generation in which strings are long enough to cover the problem. The solution of a difficult nonlinear problem to optimality suggests that messy GAs can solve more difficult problems than has been possible to date with other genetic algorithms. The ramifications of these techniques in search and machine learning are explored, including the possibility of messy floating-point codes, messy permutations, and messy classifiers. © 1989 Complex Systems Publications, Inc. David E. Goldberg, Bradley Korb, and Kalyanmoy Deb", "title": "" }, { "docid": "cd8bd76ecebbd939400b4724499f7592", "text": "Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so it often leverages large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching bottom layers, which is key to learn modality-specific features. In contrast, we focus on the bottom layers, and propose an alternative strategy to learn depth features combining local weakly supervised training from patches followed by global fine tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth-only and combined RGB-D data.", "title": "" } ]
scidocsrr
6f0bd49ae03d8bcb3fb3a549c55da8a8
Accident Detection and Smart Rescue System using Android Smartphone with Real-Time Location Tracking
[ { "docid": "ff952443eef41fb430ff2831b5ee33d5", "text": "The increasing activity in the Intelligent Transportation Systems (ITS) area faces a strong limitation: the slow pace at which the automotive industry is making cars \"smarter\". On the contrary, the smartphone industry is advancing quickly. Existing smartphones are endowed with multiple wireless interfaces and high computational power, being able to perform a wide variety of tasks. By combining smartphones with existing vehicles through an appropriate interface we are able to move closer to the smart vehicle paradigm, offering the user new functionalities and services when driving. In this paper we propose an Android-based application that monitors the vehicle through an On Board Diagnostics (OBD-II) interface, being able to detect accidents. Our proposed application estimates the G force experienced by the passengers in case of a frontal collision, which is used together with airbag triggers to detect accidents. The application reacts to positive detection by sending details about the accident through either e-mail or SMS to pre-defined destinations, immediately followed by an automatic phone call to the emergency services. Experimental results using a real vehicle show that the application is able to react to accident events in less than 3 seconds, a very low time, validating the feasibility of smartphone based solutions for improving safety on the road.", "title": "" } ]
[ { "docid": "394410f85e2911eb95678472e35bb9e1", "text": "The purpose of this article was to build a license plates recognition system with high accuracy at night. The system, based on regular PC, catches video frames which include a visible car license plate and processes them. Once a license plate is detected, its digits are recognized, and then checked against a database. The focus is on the modified algorithms to identify the individual characters. In this article, we use the template-matching method and neural net method together, and make some progress on the study before. The result showed that the accuracy is higher at night.", "title": "" }, { "docid": "3ae8865602c53847a0eec298c698a743", "text": "BACKGROUND\nA low ratio of utilization of healthcare services in postpartum women may contribute to maternal deaths during the postpartum period. The maternal mortality ratio is high in the Philippines. The aim of this study was to examine the current utilization of healthcare services and the effects on the health of women in the Philippines who delivered at home.\n\n\nMETHODS\nThis was a cross-sectional analytical study, based on a self-administrated questionnaire, conducted from March 2015 to February 2016 in Muntinlupa, Philippines. Sixty-three postpartum women who delivered at home or at a facility were enrolled for this study. A questionnaire containing questions regarding characteristics, utilization of healthcare services, and abnormal symptoms during postpartum period was administered. To analyze the questionnaire data, the sample was divided into delivery at home and delivery at a facility. Chi-square test, Fisher's exact test, and Mann-Whitney U test were used.\n\n\nRESULTS\nThere were significant differences in the type of birth attendant, area of residence, monthly income, and maternal and child health book usage between women who delivered at home and those who delivered at a facility (P<0.01). There was significant difference in the utilization of antenatal checkup (P<0.01) during pregnancy, whilst there was no significant difference in utilization of healthcare services during the postpartum period. Women who delivered at home were more likely to experience feeling of irritated eyes and headaches, and continuous abdominal pain (P<0.05).\n\n\nCONCLUSION\nFinancial and environmental barriers might hinder the utilization of healthcare services by women who deliver at home in the Philippines. 
Low utilization of healthcare services in women who deliver at home might result in more frequent abnormal symptoms during postpartum.", "title": "" }, { "docid": "26a599c22c173f061b5d9579f90fd888", "text": "Markov Logic: An Interface Layer for Artificial Intelligence, by Pedro Domingos and Daniel Lowd.", "title": "" }, { "docid": "cb2309b5290572cf7211f69cac7b99e8", "text": "Real-time tracking of human body motion is an important technology in synthetic environments, robotics, and other human-computer interaction applications. This paper presents an extended Kalman filter designed for real-time estimation of the orientation of human limb segments. The filter processes data from small inertial/magnetic sensor modules containing triaxial angular rate sensors, accelerometers, and magnetometers. The filter represents rotation using quaternions rather than Euler angles or axis/angle pairs. Preprocessing of the acceleration and magnetometer measurements using the Quest algorithm produces a computed quaternion input for the filter. This preprocessing reduces the dimension of the state vector and makes the measurement equations linear. Real-time implementation and testing results of the quaternion-based Kalman filter are presented. Experimental results validate the filter design, and show the feasibility of using inertial/magnetic sensor modules for real-time human body motion tracking", "title": "" }, { "docid": "abe03f24c8e6116f8a9eba1d5dbaf867", "text": "Executive functions consist of multiple high-level cognitive processes that drive rule generation and behavioral selection. An emergent property of these processes is the ability to adjust behavior in response to changes in one's environment (i.e., behavioral flexibility). These processes are essential to normal human behavior, and may be disrupted in diverse neuropsychiatric conditions, including schizophrenia, alcoholism, depression, stroke, and Alzheimer's disease. Understanding of the neurobiology of executive functions has been greatly advanced by the availability of animal tasks for assessing discrete components of behavioral flexibility, particularly strategy shifting and reversal learning. 
While several types of tasks have been developed, most are non-automated, labor intensive, and allow testing of only one animal at a time. The recent development of automated, operant-based tasks for assessing behavioral flexibility streamlines testing, standardizes stimulus presentation and data recording, and dramatically improves throughput. Here, we describe automated strategy shifting and reversal tasks, using operant chambers controlled by custom written software programs. Using these tasks, we have shown that the medial prefrontal cortex governs strategy shifting but not reversal learning in the rat, similar to the dissociation observed in humans. Moreover, animals with a neonatal hippocampal lesion, a neurodevelopmental model of schizophrenia, are selectively impaired on the strategy shifting task but not the reversal task. The strategy shifting task also allows the identification of separate types of performance errors, each of which is attributable to distinct neural substrates. The availability of these automated tasks, and the evidence supporting the dissociable contributions of separate prefrontal areas, makes them particularly well-suited assays for the investigation of basic neurobiological processes as well as drug discovery and screening in disease models.", "title": "" }, { "docid": "0b8ec67f285c4186866f42305dfb7cf2", "text": "Some deep convolutional neural networks were proposed for time-series classification and class imbalanced problems. However, those models performed degraded and even failed to recognize the minority class of an imbalanced temporal sequences dataset. Minority samples would bring troubles for temporal deep learning classifiers due to the equal treatments of majority and minority class. Until recently, there were few works applying deep learning on imbalanced time-series classification (ITSC) tasks. Here, this paper aimed at tackling ITSC problems with deep learning. An adaptive cost-sensitive learning strategy was proposed to modify temporal deep learning models. Through the proposed strategy, classifiers could automatically assign misclassification penalties to each class. In the experimental section, the proposed method was utilized to modify five neural networks. They were evaluated on a large volume, real-life and imbalanced time-series dataset with six metrics. Each single network was also tested alone and combined with several mainstream data samplers. Experimental results illustrated that the proposed costsensitive modified networks worked well on ITSC tasks. Compared to other methods, the cost-sensitive convolution neural network and residual network won out in the terms of all metrics. Consequently, the proposed cost-sensitive learning strategy can be used to modify deep learning classifiers from cost-insensitive to costsensitive. Those cost-sensitive convolutional networks can be effectively applied to address ITSC issues.", "title": "" }, { "docid": "16afaad8bfdc64f9d97e9829f2029bc6", "text": "The combination of limited individual information and costly information acquisition in markets for experience goods leads us to believe that significant peer effects drive demand in these markets. In this paper we model the effects of peers on the demand patterns of products in the market experience goods microfunding. 
By analyzing data from an online crowdfunding platform from 2006 to 2010 we are able to ascertain that peer effects, and not network externalities, influence consumption.", "title": "" }, { "docid": "8dc400d9745983da1e91f0cec70606c9", "text": "Aspect-Oriented Programming (AOP) is intended to ease situations that involve many kinds of code tangling. This paper reports on a study to investigate AOP's ability to ease tangling related to exception detection and handling. We took an existing framework written in Java™, the JWAM framework, and partially reengineered its exception detection and handling aspects using AspectJ™, an aspect-oriented programming extension to Java.\nWe found that AspectJ supported implementations that drastically reduced the portion of the code related to exception detection and handling. In one scenario, we were able to reduce that code by a factor of 4. We also found that, with respect to the original implementation in plain Java, AspectJ provided better support for different configurations of exceptional behaviors, more tolerance for changes in the specifications of exceptional behaviors, better support for incremental development, better reuse, automatic enforcement of contracts in applications that use the framework, and cleaner program texts. We also found some weaknesses of AspectJ that should be addressed in the future.", "title": "" }, { "docid": "875bba98f3b6dcdc851798c9eef2aa3e", "text": "This paper presents a DC−30 GHz single-polefour-throw (SP4T) CMOS switch using 0.13 μm CMOS process. The CMOS transistor layout is done to minimize the substrate network resistance. The on-chip matching inductors and routing are designed for a very small die area (250 × 180 μm), and modeled using full-wave EM simulations. The SP4T CMOS switch result in an insertion loss of 1.8 dB and 2.7 dB at 5 GHz and 24 GHz, respectively. The isolation is > 25 dB up to 30 GHz and achieved using a series-shunt switch configuration. The measured input P1dB and IIP3 of the SP4T switch are 9 dBm and 21 dBm, respectively. To our knowledge, this is the first ultra wideband CMOS SP4T switch and with a very small chip area.", "title": "" }, { "docid": "1ee33deb30b4ffae5ea16dc4ad2f93ff", "text": "Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices. In order to train networks that can be effectively discretized without loss of performance, we introduce a differentiable quantization procedure. Differentiability can be achieved by transforming continuous distributions over the weights and activations of the network to categorical distributions over the quantization grid. These are subsequently relaxed to continuous surrogates that can allow for efficient gradient-based optimization. We further show that stochastic rounding can be seen as a special case of the proposed approach and that under this formulation the quantization grid itself can also be optimized with gradient descent. We experimentally validate the performance of our method on MNIST, CIFAR 10 and Imagenet classification.", "title": "" }, { "docid": "02d919498f5fd51bc6603b81d91ab1a2", "text": "This study was designed to understand which factors influence consumer hesitation or delay in online product purchases. 
The study examined four groups of variables (i.e., consumer characteristics, contextual factors perceived uncertainty factors, and medium/channel innovation factors) that predict three types of online shopping hesitation (i.e., overall hesitation, shopping cart abandonment, and hesitation at the final payment stage). We found that different sets of delay factors are related to different aspects of online shopping hesitation. The study concludes with suggestion for various delay-reduction devices to help consumers close their online decision hesitation.", "title": "" }, { "docid": "c59e0968b2d4dc314e52c116b21c3659", "text": "This document aims to clarify frequent questions on using the Accord.NET Framework to perform statistical analyses. Here, we reproduce all steps of the famous Lindsay's Tutorial on Principal Component Analysis, in an attempt to give the reader a complete hands-on overview on the framework's basics while also discussing some of the results and sources of divergence between the results generated by Accord.NET and by other software packages.", "title": "" }, { "docid": "3a4841b9aefdd0f96125132eaabdac49", "text": "Unstructured text data produced on the internet grows rapidly, and sentiment analysis for short texts becomes a challenge because of the limit of the contextual information they usually contain. Learning good vector representations for sentences is a challenging task and an ongoing research area. Moreover, learning long-term dependencies with gradient descent is difficult in neural network language model because of the vanishing gradients problem. Natural Language Processing (NLP) systems traditionally treat words as discrete atomic symbols; the model can leverage small amounts of information regarding the relationship between the individual symbols. In this paper, we propose ConvLstm, neural network architecture that employs Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) on top of pre-trained word vectors. In our experiments, ConvLstm exploit LSTM as a substitute of pooling layer in CNN to reduce the loss of detailed local information and capture long term dependencies in sequence of sentences. We validate the proposed model on two sentiment datasets IMDB, and Stanford Sentiment Treebank (SSTb). Empirical results show that ConvLstm achieved comparable performances with less parameters on sentiment analysis tasks.", "title": "" }, { "docid": "b2af36852b94260f692241eef651cc88", "text": "This paper describes empirical research into agile requirements engineering (RE) practices. Based on an analysis of data collected in 16 US software development organizations, we identify six agile practices. We also identify seven challenges that are created by the use of these practices. We further analyse how this collection of practices helps mitigate some, while exacerbating other risks in RE. We provide a framework for evaluating the impact and appropriateness of agile RE practices by relating them to RE risks. Two risks that are intractable by agile RE practices emerge from the analysis. First, problems with customer inability and a lack of concurrence among customers significantly impact agile development. Second, risks associated with the neglecting non-functional requirements such as security and scalability are a serious concern. 
Developers should carefully evaluate the risk factors in their project environment to understand whether the benefits of agile RE practices outweigh the costs imposed by the challenges.", "title": "" }, { "docid": "c2da7d6aa76a08c98239ddb3ed07ef33", "text": "Several lines of evidence suggest that altered serotonin (5-HT) function persists after recovery from anorexia nervosa (AN) and bulimia nervosa (BN). We compared 11 subjects who recovered (>1 year normal weight, regular menstrual cycles, no bingeing or purging) from restricting-type AN (REC RAN), 7 who recovered from bulimia-type AN (REC BAN), 9 who recovered from BN (REC BN), and 10 healthy control women (CW). Positron emission tomography (PET) imaging with [11C]McN5652 was used to assess the 5-HT transporter (5-HTT). For [11C]McN5652, distribution volume (DV) values were determined using a two-compartment, three-parameter tracer kinetic model, and specific binding was assessed using the binding potential (BP, BP = DVregion of interest/DVcerebellum − 1). After correction for multiple comparisons, the four groups showed significant (p < 0.05) differences for [11C]McN5652 BP values for the dorsal raphe and antero-ventral striatum (AVS). Post-hoc analysis revealed that REC RAN had significantly increased [11C]McN5652 BP compared to REC BAN in these regions. Divergent 5-HTT activity in subtypes of eating disorder subjects may provide important insights as to why these groups have differences in affective regulation and impulse control.", "title": "" }, { "docid": "727a97b993098aa1386e5bfb11a99d4b", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.", "title": "" }, { "docid": "8fbec2539107e58a6cd4e6266dc20ccc", "text": "The Indoor flights of UAV (Unmanned Aerial Vehicle) are susceptible to impacts of multiples obstacles and walls. The most basic controller that a drone requires in order to achieve indoor flight, is a controller that can maintain the drone flying in the same site, this is called hovering control. This paper presents a fuzzy PID controller for hovering. The control system to modify the gains of the parameters of the PID controllers in the x and y axes as a function of position and error in each axis, of a known environment. Flight tests were performed over an AR.Drone 2.0, comparing RMSE errors of hovering with classical PID and fuzzy PID under disturbances. The fuzzy PID controller reduced the average error from 11 cm to 8 cm in a 3 minutes test. This result is an improvement over previously published works.", "title": "" }, { "docid": "d244509f1f38b93d2c04b4b4fa8070a4", "text": "Recent research has shown the usefulness of using collective user interaction data (e.g., query logs) to recommend query modification suggestions for Intranet search. However, most of the query suggestion approaches for Intranet search follow an ``one size fits all'' strategy, whereby different users who submit an identical query would get the same query suggestion list. 
This is problematic, as even with the same query, different users may have different topics of interest, which may change over time in response to the user's interaction with the system.\n We address the problem by proposing a personalised query suggestion framework for Intranet search. For each search session, we construct two temporal user profiles: a click user profile using the user's clicked documents and a query user profile using the user's submitted queries. We then use the two profiles to re-rank the non-personalised query suggestion list returned by a state-of-the-art query suggestion method for Intranet search. Experimental results on a large-scale query logs collection show that our personalised framework significantly improves the quality of suggested queries.", "title": "" }, { "docid": "ed9a9308c16fee2ff828306c83bcda6a", "text": "Cloud services have recently undergone a shift from monolithic applications to microservices, with hundreds or thousands of loosely-coupled microservices comprising the end-to-end application. Microservices present both opportunities and challenges when optimizing for quality of service (QoS) and cloud utilization. In this paper we explore the implications cloud microservices have on system bottlenecks, and datacenter server design. We first present and characterize an end-to-end application built using tens of popular open-source microservices that implements a movie renting and streaming service, and is modular and extensible. We then use the end-to-end service to study the scalability and performance bottlenecks of microservices, and highlight implications they have on the design of datacenter hardware. Specifically, we revisit the long-standing debate of brawny versus wimpy cores in the context of microservices, we quantify the I-cache pressure they introduce, and measure the time spent in computation versus communication between microservices over RPCs. As more cloud applications switch to this new programming model, it is increasingly important to revisit the assumptions we have previously used to build and manage cloud systems.", "title": "" }, { "docid": "4c2bfba9a36b4fb2f0e17b5feb58b22f", "text": "This paper introduces a new approach in automatic attendance management systems, extended with computer vision algorithms. We propose using real time face detection algorithms integrated on an existing Learning Management System (LMS), which automatically detects and registers students attending on a lecture. The system represents a supplemental tool for instructors, combining algorithms used in machine learning with adaptive methods used to track facial changes during a longer period of time. This new system aims to be less time consuming than traditional methods, at the same time being nonintrusive and not interfere with the regular teaching process. The tool promises to offer accurate results and a more detailed reporting system which shows student activity and attendance in a classroom.", "title": "" } ]
scidocsrr
14765e263b74a26d51193c8c359c38d8
A Fast and Accurate Multilevel Inversion of the Radon Transform
[ { "docid": "84320e0f9dfb72b561012a6c0d33232c", "text": "image reconstruction from projections the fundamentals of computerized tomography computer ebook, image reconstruction from projections the fundamentals of computerized tomography computer pdf, image reconstruction from projections the fundamentals of computerized tomography computer doc and image reconstruction from projections the fundamentals of computerized tomography computer epub for image reconstruction from projections the fundamentals of computerized tomography computer read online or image reconstruction from projections the fundamentals of computerized tomography computer download if want read offline.", "title": "" } ]
[ { "docid": "66cd5501be682957a2ee10ce91136c01", "text": "The use of inaccurate or outdated database statistics by the query optimizer in a relational DBMS often results in a poor choice of query execution plans and hence unacceptably long query processing times. Configuration and maintenance of these statistics has traditionally been a time-consuming manual operation, requiring that the database administrator (DBA) continually monitor query performance and data changes in order to determine when to refresh the statistics values and when and how to adjust the set of statistics that the DBMS maintains. In this paper we describe the new Automated Statistics Collection (ASC) component of IBM® DB2® Universal DatabaseTM (DB2 UDB). This autonomic technology frees the DBA from the tedious task of manually supervising the collection and maintenance of database statistics. ASC monitors both the update-delete-insert (UDI) activities on the data as well as query feedback (QF), i.e., the results of the queries that are executed on the data. ASC uses these two sources of information to automatically decide which statistics to collect and when to collect them. This combination of UDI-driven and QF-driven autonomic processes ensures that the system can handle unforeseen queries while also ensuring good performance for frequent and important queries. We present the basic concepts, architecture, and key implementation details of ASC in DB2 UDB, and present a case study showing how the use of ASC can speed up a query workload by orders of magnitude without requiring any DBA intervention.", "title": "" }, { "docid": "c5796e3bbe9500a8a14f03873880ca09", "text": "This review highlights the latest developments associated with the use of the Normalized Difference Vegetation Index (NDVI) in ecology. Over the last decade, the NDVI has proven extremely useful in predicting herbivore and non-herbivore distribution, abundance and life history traits in space and time. Due to the continuous nature of NDVI since mid-1981, the relative importance of different temporal and spatial lags on population performance can be assessed, widening our understanding of population dynamics. Previously thought to be most useful in temperate environments, the utility of this satellite-derived index has been demonstrated even in sparsely vegetated areas. Climate models can be used to reconstruct historical patterns in vegetation dynamics in addition to anticipating the effects of future environmental change on biodiversity. NDVI has thus been established as a crucial tool for assessing past and future population and biodiversity consequences of change in climate, vegetation phenology and primary productivity.", "title": "" }, { "docid": "f14272db4779239dc7d392ef7dfac52d", "text": "3 The Rotating Calipers Algorithm 3 3.1 Computing the Initial Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.2 Updating the Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.1 Distinct Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.2 Duplicate Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.3 Multiple Polygon Edges Attain Minimum Angle . . . . . . . . . . . . . . . . . . . . . 8 3.2.4 The General Update Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
10", "title": "" }, { "docid": "6b53dc83581b3832c39d9f5675d182e3", "text": "Single image layer separation aims to divide the observed image into two independent components according to special task requirements and has been widely used in many vision and multimedia applications. Because this task is fundamentally ill-posed, most existing approaches tend to design complex priors on the separated layers. However, the cost function with complex prior regularization is hard to optimize. The performance is also compromised by fixed iteration schemes and less data fitting ability. More importantly, it is also challenging to design a unified framework to separate image layers for different applications. To partially mitigate the above limitations, we develop a flexible optimization unrolling technique to incorporate deep architectures into iterations for adaptive image layer separation. Specifically, we first design a general energy model with implicit priors and adopt the widely used alternating direction method of multiplier (ADMM) to establish our basic iteration scheme. By unrolling with residual convolution architectures, we successfully obtain a simple, flexible, and data-dependent image separation method. Extensive experiments on the tasks of rain streak removal and reflection removal validate the effectiveness of our approach.", "title": "" }, { "docid": "ce031463581fd08813991404d6178014", "text": "With the development of social networks, fake news for various commercial and political purposes has been appearing in large numbers and gotten widespread in the online world. With deceptive words, people can get infected by the fake news very easily and will share them without any fact-checking. For instance, during the 2016 US president election, various kinds of fake news about the candidates widely spread through both official news media and the online social networks. These fake news is usually released to either smear the opponents or support the candidate on their side. The erroneous information in the fake news is usually written to motivate the voters’ irrational emotion and enthusiasm. Such kinds of fake news sometimes can bring about devastating effects, and an important goal in improving the credibility of online social networks is to identify the fake news timely. In this paper, we propose to study the “fake news detection” problem. Automatic fake news identification is extremely hard, since pure model based fact-checking for news is still an open problem, and few existing models can be applied to solve the problem. With a thorough investigation of a fake news data, lots of useful explicit features are identified from both the text words and images used in the fake news. Besides the explicit features, there also exist some hidden patterns in the words and images used in fake news, which can be captured with a set of latent features extracted via the multiple convolutional layers in our model. A model named as TI-CNN (Text and Image information based Convolutinal Neural Network) is proposed in this paper. By projecting the explicit and latent features into a unified feature space, TI-CNN is trained with both the text and image information simultaneously. 
Extensive experiments carried on the real-world fake news datasets have demonstrate the effectiveness of TI-CNN in solving the fake new detection problem.", "title": "" }, { "docid": "e144a814723d205855a61cb52466ce96", "text": "In this article, we discuss the development of automatic artifact reconstruction systems capable of coping with the realities of real-world geometric puzzles that anthropologists and archaeologists face on a daily basis. Such systems must do more than find matching fragments and subsequently align these matched fragments; these systems must be capable of simultaneously solving an unknown number of multiple puzzles where all of the puzzle pieces are mixed together in an unorganized pile and each puzzle may be missing an unknown number of its pieces. Discussion has cast the puzzle reconstruction problem into a generic terminology that is formalized appropriately for the 2-D and 3-D artifact reconstruction problems. Two leading approaches for 2-D tablet reconstruction and four leading approaches for 3-D object reconstruction have been discussed in detail, including partial or complete descriptions for the numerous algorithms upon which these systems rely. Several extensions to the geometric matching problem that use patterns apparent on the fragment outer surface were also discussed that generalize the problem beyond that of matching strictly geometry. The models needed for solving these problems are new and challenging, and most involve 3-D that is largely unexplored by the signal processing community. This work is highly relevant to the new 3-D signal processing that is looming on the horizon for tele-immersion.", "title": "" }, { "docid": "65af148678516aa5a55fc6df44956fcf", "text": "The inductive assertion method is generalized to permit formal, machine-verifiable proofs of correctness for multiprocess programs. Individual processes are represented by ordinary flowcharts, and no special synchronization mechanisms are assumed, so the method can be applied to a large class of multiprocess programs. A correctness proof can be designed together with the program by a hierarchical process of stepwise refinement, making the method practical for larger programs. The resulting proofs tend to be natural formalizations of the informal proofs that are now used.", "title": "" }, { "docid": "959618d50b59ce316cebb24a18375cde", "text": "Research experiences today are limited to a privileged few at select universities. Providing open access to research experiences would enable global upward mobility and increased diversity in the scientific workforce. How can we coordinate a crowd of diverse volunteers on open-ended research? How could a PI have enough visibility into each person's contributions to recommend them for further study? We present Crowd Research, a crowdsourcing technique that coordinates open-ended research through an iterative cycle of open contribution, synchronous collaboration, and peer assessment. To aid upward mobility and recognize contributions in publications, we introduce a decentralized credit system: participants allocate credits to each other, which a graph centrality algorithm translates into a collectively-created author order. Over 1,500 people from 62 countries have participated, 74% from institutions with low access to research. 
Over two years and three projects, this crowd has produced articles at top-tier Computer Science venues, and participants have gone on to leading graduate programs.", "title": "" }, { "docid": "b1a656d86ed4c9469f8d2a04186ff8bc", "text": "The wealth of social information presented on Facebook is astounding. While these affordances allow users to keep up-to-date, they also produce a basis for social comparison and envy on an unprecedented scale. Even though envy may endanger users’ life satisfaction and lead to platform avoidance, no study exists uncovering this dynamics. To close this gap, we build on responses of 584 Facebook users collected as part of two independent studies. In study 1, we explore the scale, scope, and nature of envy incidents triggered by Facebook. In study 2, the role of envy feelings is examined as a mediator between intensity of passive following on Facebook and users’ life satisfaction. Confirming full mediation, we demonstrate that passive following exacerbates envy feelings, which decrease life satisfaction. From a provider’s perspective, our findings signal that users frequently perceive Facebook as a stressful environment, which may, in the long-run, endanger platform sustainability.", "title": "" }, { "docid": "31a325246ea254b05cd047b6fad27f77", "text": "It is a conventional wisdom in the speech community that better speech recognition accuracy is a good indicator for better spoken language understanding accuracy, given a fixed understanding component. The findings in this work reveal that this is not always the case. More important than word error rate reduction, the language model for recognition should be trained to match the optimization objective for understanding. In this work, we applied a spoken language understanding model as the language model in speech recognition. The model was obtained with an example-based learning algorithm that optimized the understanding accuracy. Although the speech recognition word error rate is 46% higher than the trigram model, the overall slot understanding error can be reduced by as much as 17%.", "title": "" }, { "docid": "62edabfb877e280dfe69035dc7d0f1cb", "text": "OBJECTIVES\nTo present the importance of Evidence-based Health Informatics (EBHI) and the ethical imperative of this approach; to highlight the work of the IMIA Working Group on Technology Assessment and Quality Improvement and the EFMI Working Group on Assessment of Health Information Systems; and to introduce the further important evaluation and evidence aspects being addressed.\n\n\nMETHODS\nReviews of IMIA, EFMA and other initiatives, together with literature reviews on evaluation methods and on published systematic reviews.\n\n\nRESULTS\nPresentation of the rationale for the health informatics domain to adopt a scientific approach by assessing impact, avoiding harm, and empirically demonstrating benefit and best use; reporting of the origins and rationale of the IMIA- and EQUATOR-endorsed Statement on Reporting of Evaluation Studies in Health Informatics (STARE-HI) and of the IMIA WG's Guideline for Good Evaluation Practice in Health Informatics (GEP-HI); presentation of other initiatives for objective evaluation; and outlining of further work in hand on usability and indicators; together with the case for development of relevant evaluation methods in newer applications such as telemedicine. 
The focus is on scientific evaluation as a reliable source of evidence, and on structured presentation of results to enable easy retrieval of evidence.\n\n\nCONCLUSIONS\nEBHI is feasible, necessary for efficiency and safety, and ethically essential. Given the significant impact of health informatics on health systems, care delivery and personal health, it is vital that cultures change to insist on evidence-based policies and investment, and that emergent global moves for this are supported.", "title": "" }, { "docid": "bac5b36d7da7199c1bb4815fa0d5f7de", "text": "During quadrupedal trotting, diagonal pairs of limbs are set down in unison and exert forces on the ground simultaneously. Ground-reaction forces on individual limbs of trotting dogs were measured separately using a series of four force platforms. Vertical and fore-aft impulses were determined for each limb from the force/time recordings. When mean fore-aft acceleration of the body was zero in a given trotting step (steady state), the fraction of vertical impulse on the forelimb was equal to the fraction of body weight supported by the forelimbs during standing (approximately 60 %). When dogs accelerated or decelerated during a trotting step, the vertical impulse was redistributed to the hindlimb or forelimb, respectively. This redistribution of the vertical impulse is due to a moment exerted about the pitch axis of the body by fore-aft accelerating and decelerating forces. Vertical forces exerted by the forelimb and hindlimb resist this pitching moment, providing stability during fore-aft acceleration and deceleration.", "title": "" }, { "docid": "fb5a3c43655886c0387e63cd02fccd50", "text": "Android is the most widely used smartphone OS with 82.8% market share in 2015 (IDC, 2015). It is therefore the most widely targeted system by malware authors. Researchers rely on dynamic analysis to extract malware behaviors and often use emulators to do so. However, using emulators lead to new issues. Malware may detect emulation and as a result it does not execute the payload to prevent the analysis. Dealing with virtual device evasion is a never-ending war and comes with a non-negligible computation cost (Lindorfer et al., 2014). To overcome this state of affairs, we propose a system that does not use virtual devices for analysing malware behavior. Glassbox is a functional prototype for the dynamic analysis of malware applications. It executes applications on real devices in a monitored and controlled environment. It is a fully automated system that installs, tests and extracts features from the application for further analysis. We present the architecture of the platform and we compare it with existing Android dynamic analysis platforms. Lastly, we evaluate the capacity of Glassbox to trigger application behaviors by measuring the average coverage of basic blocks on the AndroCoverage dataset (AndroCoverage, 2016). We show that it executes on average 13.52% more basic blocks than the Monkey program.", "title": "" }, { "docid": "60de343325a305b08dfa46336f2617b5", "text": "On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. 
Britain’s National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A Bcritical^ patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.", "title": "" }, { "docid": "1294c785e37cf5028f5a1e0b4decb319", "text": "This paper describes a flow control of the dye in the paper mill with the quasi-Z-source indirect matrix converter (QZSIMC) fed induction motor drive. More than a decade the voltage-source inverter (VSI) and current-source inverter (CSI) have been used to control the speed of the induction motor, which in turns controls the flow of dye. Recently, the matrix converter (MC) has been an excellent competitor for the VSI or CSI for its compactness. The voltage transfer ratio of the VSI, CSI, and MC has been limited to 0.866. Thus, the efficiency of these converters is less. To improve the voltage transfer ratio the quasi-Z-source network (QZSN) is to be used between voltage source and indirect MC (IMC). The modification in the shoot-through duty ratio of the QZSN varies the voltage transfer ratio greater than 0.866. Different voltage transfer ratios are needed for different voltage sag conditions. For a variation of the duty ratio of the QZSN, the fuzzy logic controller has been presented. To control the IMC vector control with space vector modulation has been presented. This paper proposes the implementation of QZSIMC adjustable speed drive for the flow control of dye in paper mill during different voltage sag conditions. A 4-kW prototype has been built and the effectiveness of the proposed system is verified with simulation results and experimental setup. Simulation is done in MATLAB, Simulink platform. Experimental setup is done with the aid of a TMS320F2812 (Texas Instrument) processor. The experimental results validate the maintenance of the speed of an induction motor at the set condition, thus controlling the perfect flow of dye in paper manufacturing technology.", "title": "" }, { "docid": "7b0945c77fbe5b207ff02fce811e98e6", "text": "T he map has evolved over the past few centuries as humanity’s primary method for storing and communicating knowledge of the Earth’s surface. Topographic maps portray the general form of the surface and its primary physical and cultural features; thematic maps show the variation of more specialized properties such as soil type or population density; and bathymetric maps and hydrographic charts show the characteristics of the sea floor. 
Maps serve as one of the most important repositories of both the raw data and the results of geographic inquiry, and mapmaking has always figured prominently in the skill set of geographers or their supporting staff. Maps are thus important and indispensable tools in the geographer’s search for understanding of how human and physical processes act and interact on the Earth’s surface: of how the world works. Geographic information systems (GIS) were devised in the 1960s as computer applications for handling large volumes of information obtained from maps and for performing operations that would be too tedious, expensive, or inaccurate to perform by hand. The Canada Geographic Information System, widely recognized as the first GIS, was built for the purpose of making vast numbers of calculations of area, reporting the results in tables. Over time, the range of functions performed by GIS has grown exponentially, and today it is reasonable to think of a GIS as able to perform virtually any conceivable operation on data obtained from maps (Longley et al. 2001). Geographers have adopted GIS enthusiastically, seeing it as a powerful device for storing, analyzing, and visualizing map information and thus as a much more effective substitute for the paper map (Goodchild 1988). Over the past decade numerous journals, conferences, academic positions, and programs have adopted titles that combine information with spatial or geographic and with science or theory. In what follows I will use the term geographic information science (GIScience) for simplicity and not enquire into the subtle differences between, for example, spatial and geographic information theory (Goodchild 2001). Geographers have been associated with many of these changes—and, in many cases, have been at the forefront—and many of the new programs and positions are found in departments of geography. But there has been relatively little general commentary on these trends, or on what they might mean for the discipline of geography as a whole. The first centennial of the Association of American Geographers is an appropriate occasion to reflect on the nature of GIScience and its relationship, if any, to the discipline of geography. I begin with a discussion of the nature of GIScience, of its relationship to GIS and of its links to the traditional sciences of geographic information. This leads to a discussion of whether GIScience is a natural science, concerned with discovering empirical principles and law-like statements about the world; or whether it is a design science, concerned with identifying practical principles for achieving human ends, or both. In the third major section I examine how GIScience is positioned with respect to the historic tension in geography between form and process and whether the growth of interest in GIScience has tended to favor form over process. The final section examines a future for GIScience that places greater emphasis on process and discusses the steps that will be needed to make such a future possible.", "title": "" }, { "docid": "a3fe8cf8b2689269fe8a1050cf7789d2", "text": "A boosting algorithm, AdaBoost.RT, is proposed for regression problems. The idea is to filter out examples with a relative estimation error that is higher than the pre-set threshold value, and then follow the AdaBoost procedure. Thus it requires to select the sub-optimal value of relative error threshold to demarcate predictions from the predictor as correct or incorrect. 
Some experimental results using the M5 model tree as a weak learning machine for benchmark data sets and for hydrological modeling are reported, and compared to other boosting methods, bagging and artificial neural networks, and to a single M5 model tree. AdaBoost.Rt is proved to perform better on most of the considered data sets.", "title": "" }, { "docid": "d10dc295173202332700918cab02ac2b", "text": "Markov logic networks (MLNs) have proven to be useful tools for reasoning about uncertainty in complex knowledge bases. In this paper, we extend MLNs with numerical constraints and present an efficient implementation in terms of a cutting plane method. This extension is useful for reasoning over uncertain temporal data. To show the applicability of this extension, we enrich log-linear description logics (DLs) with concrete domains (datatypes). Thereby, allowing to reason over weighted DLs with datatypes. Moreover, we use the resulting formalism to reason about temporal assertions in DBpedia, thus illustrating its practical use.", "title": "" }, { "docid": "2e0e54bd8d8bbaac19f861c951e80033", "text": "Self-service systems, online help systems, web services, mobile communication devices, remote control systems, and dashboard computers are providing ever more functionality. However, along with greater functionality, the user must also come to terms with the greater complexity and a steeper learning curve. This complexity is compounded by the sheer proliferation of different systems lacking a standard user interface. Conversational user interfaces allow various natural communication modes like speech, gestures and facial expressions for input as well as output and exploit the context in which an input is used to compute its meaning. The growing emphasis on conversational user interfaces is fundamentally inspired by the aim to support natural, flexible, efficient and powerfully expressive means of human-computer communication that are easy to learn and use. Advances in human language technology and intelligent user interfaces offer the promise of pervasive access to online information and web services. The development of conversational user interfaces allows the average person to interact with computers anytime and anywhere without special skills or training, using such common devices as a mobile phone. Advanced conversational user interfaces include the situated understanding of possibly imprecise, ambiguous or incomplete multimodal input and the generation of coordinated, cohesive, and coherent multimodal presentations. In conversational user interfaces the dialogue management is based on representing, reasoning, and exploiting models of the user, domain, task, context, and modalities. These systems are capable of real-time dialogue processing, including flexible multimodal turn-taking, backchanneling, and metacommunicative interaction. One important aspect of conversations is that the successive utterances of which it consists are often interconnected by cross references of various sorts. For instance, one utterance will use a pronoun to refer to something mentioned in the previous utterance. Computational models of discourse must be able to represent, compute and resolve such cross references. Conversational user interfaces differ in the degree with which the user or the system controls the conversation. 
In directed or menubased dialogues the system maintains tight control and the human is highly restricted in his dialogue behavior, whereas in free-form dialogue the human takes complete control and the system is totally passive. In mixed-initiative conversational user interfaces, the dialogue control moves back and forth between the system and the user like in most face-to-face conversations between humans. Four papers in this special issue deal with conversational user interfaces that use speech as the main mode of interaction. The paper by Helbig and Schindler discusses state-of-art component technologies and requirements for the successful deployment of conversational user interfaces in industrial environments such as logistics centers, assembly lines, and car inspection facilities. It shows that the speech recognition rate in such environments is still depending on the correct positioning and adjustment of the microphone and discusses the need for wireless microphones in most industrial applications of spoken dialogue systems. Block, Caspari and Schachtl describe an innovative dialogue engine for the Virtual Call Center Agent (ViCA), that provides access to product documentation. A multiframe based dialogue engine is introduced that supports natural conversations by allowing over-answering and free-order information input. The paper reports encouraging results from a usability test showing a high task completion rate. The paper by te Vrugt and Portele describes a tasked-oriented spoken dialogue system that allows the user to control a wide spectrum of infotainment applications, like a hard-disk recorder, an image browser, a music player, a TV set and an electronic program guide. The paper presents a flexible framework for such a multi-application dialogue system and an applicationindependent scheme for dialogue processing. Nöth et al. describe lessons learnt from the implementation of three commercially deployed conversational interfaces. The authors propose five guidelines, which they consider to be crucial, when building and operating telephone-based dialogue systems. One of the guidelines concerns the fact that a spoken dialogue system must react fast to any kind of user input, no matter", "title": "" }, { "docid": "a3add1c3190decbc773e0d45a0563cab", "text": "Despite the relatively recent emergence of the Unified Theory of Acceptance and Use of Technology (UTAUT), the originating article has already been cited by a large number of studies, and hence it appears to have become a popular theoretical choice within the field of information system (IS)/information technology (IT) adoption and diffusion. However, as yet there have been no attempts to analyse the reasons for citing the originating article. Such a systematic review of citations may inform researchers and guide appropriate future use of the theory. This paper therefore presents the results of a systematic review of 450 citations of the originating article in an attempt to better understand the reasons for citation, use and adaptations of the theory. Findings revealed that although a large number of studies have cited the originating article since its appearance, only 43 actually utilised the theory or its constructs in their empirical research for examining IS/IT related issues. This chapter also classifies and discusses these citations and explores the limitations of UTAUT use in existing research.", "title": "" } ]
scidocsrr
0a2096a566e42809934403552bb29697
A Novel Robot Fish With Wire-Driven Active Body and Compliant Tail
[ { "docid": "fd9992b50e6d58afab53954eac400b84", "text": "Several physico-mechanical designs evolved in fish are currently inspiring robotic devices for propulsion and manoeuvring purposes in underwater vehicles. Considering the potential benefits involved, this paper presents an overview of the swimming mechanisms employed by fish. The motivation is to provide a relevant and useful introduction to the existing literature for engineers with an interest in the emerging area of aquatic biomechanisms. The fish swimming types are presented following the well-established classification scheme and nomenclature originally proposed by Breder. Fish swim either by Body and/or Caudal Fin (BCF) movements or using Median and/or Paired Fin (MPF) propulsion. The latter is generally employed at slow speeds, offering greater manoeuvrability and better propulsive efficiency, while BCF movements can achieve greater thrust and accelerations. For both BCF and MPF locomotion specific swimming modes are identified, based on the propulsor and the type of movements (oscillatory or undulatory) employed for thrust generation. Along with general descriptions and kinematic data, the analytical approaches developed to study each swimming mode are also introduced. Particular reference is made to lunate tail propulsion, undulating fins and labriform (oscillatory pectoral fin) swimming mechanisms, identified as having the greatest potential for exploitation in artificial systems. Index Terms marine animals, hydrodynamics, underwater vehicle propulsion, mobile robots, kinematics * Submitted as a regular paper to the IEEE Journal of Oceanic Engineering, March 1998. † Ocean Systems Laboratory, Dept. of Computing & Electrical Engineering, Heriot-Watt University, Edinburgh EH14 4AS, Scotland, U.K. Tel: +(44) (0) 131 4513350. Fax: +(44) (0) 131 4513327. Email: dml@cee.hw.ac.uk ‡ Dept. of Mechanical & Chemical Engineering, Heriot-Watt University, Edinburgh EH14 4AS, Scotland,U.K. Review of Fish Swimming Modes for Aquatic Locomotion -2", "title": "" } ]
[ { "docid": "64b19ea30a17839944fbc81db7ab89ce", "text": "Kahoot, Quizizz, and Google Forms are learning technology opens for new ways of teaching in the classroom. The teachers' laptops connected to a video projector, access to wireless network and the students smartphones, tablets or laptops can be utilized to enhance the interaction between the teacher and students, as well as boost the students motivation, engagement and learning. This paper shows the results from investigating the effect of using Kahoot, Quizizz, and Google Forms in classroom on how the students' perception of concentration, engagement, enjoyment, perceived learning, motivation, and satisfaction. The results show that students learned something from doing the quiz via Kahoot, Quizizz and Google Forms. But, there are significant differences in the concentration, engagement, enjoyment, motivation, and satisfaction. Kahoot and Quizizz has presented a lot of positives over Google forms when used in the classroom.", "title": "" }, { "docid": "59655f76a875e189913029102ed8f77c", "text": "Metaphorical expressions are pervasive in natural language and pose a substantial challenge for computational semantics. The inherent compositionality of metaphor makes it an important test case for compositional distributional semantic models (CDSMs). This paper is the first to investigate whether metaphorical composition warrants a distinct treatment in the CDSM framework. We propose a method to learn metaphors as linear transformations in a vector space and find that, across a variety of semantic domains, explicitly modeling metaphor improves the resulting semantic representations. We then use these representations in a metaphor identification task, achieving a high performance of 0.82 in terms of F-score.", "title": "" }, { "docid": "877d7d467711e8cb0fd03a941c7dc9da", "text": "Film clips are widely utilized to elicit emotion in a variety of research studies. Normative ratings for scenes selected for these purposes support the idea that selected clips correspond to the intended target emotion, but studies reporting normative ratings are limited. Using an ethnically diverse sample of college undergraduates, selected clips were rated for intensity, discreteness, valence, and arousal. Variables hypothesized to affect the perception of stimuli (i.e., gender, race-ethnicity, and familiarity) were also examined. Our analyses generally indicated that males reacted strongly to positively valenced film clips, whereas females reacted more strongly to negatively valenced film clips. Caucasian participants tended to react more strongly to the film clips, and we found some variation by race-ethnicity across target emotions. Finally, familiarity with the films tended to produce higher ratings for positively valenced film clips, and lower ratings for negatively valenced film clips. These findings provide normative ratings for a useful set of film clips for the study of emotion, and they underscore factors to be considered in research that utilizes scenes from film for emotion elicitation.", "title": "" }, { "docid": "934160b33f99886f9a72d0b871054101", "text": "One of the common endeavours in engineering applications is outlier detection, which aims to identify inconsistent records from large amounts of data. Although outlier detection schemes in data mining discipline are acknowledged as a more viable solution to efficient identification of anomalies from these data repository, current outlier mining algorithms require the input of domain parameters. 
These parameters are often unknown, difficult to determine and vary across different datasets containing different cluster features. This paper presents a novel resolution-based outlier notion and a nonparametric outlier-mining algorithm, which can efficiently identify and rank top listed outliers from a wide variety of datasets. The algorithm generates reasonable outlier results by taking both local and global features of a dataset into account. Experiments are conducted using both synthetic datasets and a real life construction equipment dataset from a large road building contractor. Comparison with the current outlier mining algorithms indicates that the proposed algorithm is more effective and can be integrated into a decision support system to serve as a universal detector of potentially inconsistent records.", "title": "" }, { "docid": "dd54483344a58ec7822237d1a222d67e", "text": "It is widely recognized that the risk of fractures is closely related to the typical decline in bone mass during the ageing process in both women and men. Exercise has been reported as one of the best non-pharmacological ways to improve bone mass throughout life. However, not all exercise regimens have the same positive effects on bone mass, and the studies that have evaluated the role of exercise programmes on bone-related variables in elderly people have obtained inconclusive results. This systematic review aims to summarize and update present knowledge about the effects of different types of training programmes on bone mass in older adults and elderly people as a starting point for developing future interventions that maintain a healthy bone mass and higher quality of life in people throughout their lifetime. A literature search using MEDLINE and the Cochrane Central Register of Controlled Trials databases was conducted and bibliographies for studies discussing the effect of exercise interventions in older adults published up to August 2011 were examined. Inclusion criteria were met by 59 controlled trials, 7 meta-analyses and 8 reviews. The studies included in this review indicate that bone-related variables can be increased, or at least the common decline in bone mass during ageing attenuated, through following specific training programmes. Walking provides a modest increase in the loads on the skeleton above gravity and, therefore, this type of exercise has proved to be less effective in osteoporosis prevention. Strength exercise seems to be a powerful stimulus to improve and maintain bone mass during the ageing process. Multi-component exercise programmes of strength, aerobic, high impact and/or weight-bearing training, as well as whole-body vibration (WBV) alone or in combination with exercise, may help to increase or at least prevent decline in bone mass with ageing, especially in postmenopausal women. This review provides, therefore, an overview of intervention studies involving training and bone measurements among older adults, especially postmenopausal women. Some novelties are that WBV training is a promising alternative to prevent bone fractures and osteoporosis. Because this type of exercise under prescription is potentially safe, it may be considered as a low impact alternative to current methods combating bone deterioration. In other respects, the ability of peripheral quantitative computed tomography (pQCT) to assess bone strength and geometric properties may prove advantageous in evaluating the effects of training on bone health. 
As a result of changes in bone mass becoming evident by pQCT even when dual energy X-ray absortiometry (DXA) measurements were unremarkable, pQCT may provide new knowledge about the effects of exercise on bone that could not be elucidated by DXA. Future research is recommended including longest-term exercise training programmes, the addition of pQCT measurements to DXA scanners and more trials among men, including older participants.", "title": "" }, { "docid": "efb81d85abcf62f4f3747a58154c5144", "text": "Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion. Our code is available at https://github.com/sergeytulyakov/mocogan.", "title": "" }, { "docid": "fcb2578b97105162326c0260b6f256e9", "text": "Journalism has enjoyed a rich and relatively stable history of professionalization. Scholars coming from a variety of disciplines have theorized this history, forming a consistent body of knowledge codified in national and international handbooks and canonical readers. However, recent work and analysis suggest that the supposed core of journalism and the assumed consistency of the inner workings of news organizations are problematic starting points for journalism studies. In this article, we challenge the consensual (self-)presentation of journalism - in terms of its occupational ideology, its professional culture, and its sedimentation in routines and organizational structures (cf. the newsroom) in the context of its reconfiguration as a post-industrial, entrepreneurial, and atypical way of working and of being at work. We outline a way beyond individualist or institutional approaches to do justice to the current complex transformation of the profession. We propose a framework to bring together these approaches in a dialectic attempt to move through and beyond journalism as it has traditionally been conceptualized and practiced, allowing for a broader definition and understanding of the myriad of practices that make up journalism.", "title": "" }, { "docid": "92dabad10ff49f307138e0738d8ebd50", "text": "Traffic forecasting is an important task which is required by overload warning and capacity planning for mobile networks. Based on analysis of real data collected by China Mobile Communications Corporation (CMCC) Heilongjiang Co. Ltd, this paper proposes to use the multiplicative seasonal ARIMA models for mobile communication traffic forecasting. 
Experiments and test results show that the whole solution presented in this paper is feasible and effective to fulfill the requirements in traffic forecasting application for mobile networks.", "title": "" }, { "docid": "077f9a831b5adbff1e809df01428a197", "text": "In this paper, we simulated and analyzed about through-via's signal integrity, (SI)/power integrity, and (PI)/electromagnetic interference (EMI) that goes through the power/ground plane which was caused by the high dielectric material that supports the embedded high value capacitors. In order to evaluate through-via's effectiveness, the simulation condition was operated on the LTCC module for mixed signal system. For the circumstance SI, delay time of signal line and signal quality significantly decrease because of higher parasitic capacitance between through-via's and anti-pads. However, in a situation where the dielectric material is chosen, the EMI's characteristic power/ground plan with embedded high dielectric material shows a better characteristic than when the low dielectric material was chosen. As a result, if the high dielectric material is applied on LTCC module, the mixed module packaging that is made with the digital IC and RF component will be realized as the optimistic design. The simulation structure takes the LTCC process designer guidebook as a basic structure and uses the HFSS/designer tool. When the dielectric constant uses 7.8 and 500, the through-via's that pass through the LTCC module are delay time of 41.4 psec and 56, respectively. When the dielectric constant of 500 is compared with 7.8, the power/ground plane impedance shows a trait lower than several GHz range and effectiveness in the rejection of the resonance mode. When uses the dielectric constant is 500, the EMI level is 7.8 and it is prove that the EMI level improves at maximum 20 dB V/m.", "title": "" }, { "docid": "75ba12682f959e53ec640d3b9da9a4e5", "text": "Methods for testing and analyzing agent-basedmodels have drawn increasing attention in the literature, in thecontextof e orts toestablish standard frameworks for thedevelopmentanddocumentationofmodels. This process can benefit from the use of established so ware environments for data analysis and visualization. For instance, the popular NetLogo agent-based modelling so ware can be interfaced with Mathematica and R, letting modellers use the advanced analysis capabilities available in these programming languages. To extend these capabilities to anadditional user base, this paper presents thepyNetLogo connector, which allows NetLogo to be controlled from the Python general-purpose programming language. Given Python’s increasing popularity for scientific computing, this provides additional flexibility for modellers and analysts. PyNetLogo’s features are demonstrated by controlling one of NetLogo’s example models from an interactive Python environment, then performing a global sensitivity analysis with parallel processing.", "title": "" }, { "docid": "113373d6a9936e192e5c3ad016146777", "text": "This paper examines published data to develop a model for detecting factors associated with false financia l statements (FFS). Most false financial statements in Greece can be identified on the basis of the quantity and content of the qualification s in the reports filed by the auditors on the accounts. A sample of a total of 76 firms includes 38 with FFS and 38 non-FFS. Ten financial variables are selected for examination as potential predictors of FFS. 
Univariate and multivariate statistica l techniques such as logistic regression are used to develop a model to identify factors associated with FFS. The model is accurate in classifying the total sample correctly with accuracy rates exceeding 84 per cent. The results therefore demonstrate that the models function effectively in detecting FFS and could be of assistance to auditors, both internal and external, to taxation and other state authorities and to the banking system. the empirical results and discussion obtained using univariate tests and multivariate logistic regression analysis. Finally, in the fifth section come the concluding remarks.", "title": "" }, { "docid": "7282b16c6a433c318a93e270125777ff", "text": "Background: Tooth extraction is associated with dimensional changes in the alveolar ridge. The aim was to examine the effect of single versus contiguous teeth extractions on the alveolar ridge remodeling. Material and Methods: Five female beagle dogs were randomly divided into three groups on the basis of location (anterior or posterior) and number of teeth extracted – exctraction socket classification: group 1 (one dog): single-tooth extraction; group 2 (two dogs): extraction of two teeth; and group 3 (two dogs): extraction of three teeth in four anterior sites and four posterior sites in both jaws. The dogs were sacrificed after 4 months. Sagittal sectioning of each extraction site was performed and evaluated using microcomputed tomography. Results: Buccolingual or palatal bone loss was observed 4 months after extraction in all three groups. The mean of the alveolar ridge width loss in group 1 (single-tooth extraction) was significantly less than those in groups 2 and 3 (p < .001) (multiple teeth extraction). Three-teeth extraction (group 3) had significantly more alveolar bone loss than two-teeth extraction (group 2) (p < .001). The three-teeth extraction group in the upper and lower showed more obvious resorption on the palatal/lingual side especially in the lower group posterior locations. Conclusion: Contiguous teeth extraction caused significantly more alveolar ridge bone loss as compared with when a single tooth is extracted.", "title": "" }, { "docid": "89322e0d2b3566aeb85eeee9f505d5b2", "text": "Parkinson's disease is a neurological disorder with evolving layers of complexity. It has long been characterised by the classical motor features of parkinsonism associated with Lewy bodies and loss of dopaminergic neurons in the substantia nigra. However, the symptomatology of Parkinson's disease is now recognised as heterogeneous, with clinically significant non-motor features. Similarly, its pathology involves extensive regions of the nervous system, various neurotransmitters, and protein aggregates other than just Lewy bodies. The cause of Parkinson's disease remains unknown, but risk of developing Parkinson's disease is no longer viewed as primarily due to environmental factors. Instead, Parkinson's disease seems to result from a complicated interplay of genetic and environmental factors affecting numerous fundamental cellular processes. The complexity of Parkinson's disease is accompanied by clinical challenges, including an inability to make a definitive diagnosis at the earliest stages of the disease and difficulties in the management of symptoms at later stages. Furthermore, there are no treatments that slow the neurodegenerative process. 
In this Seminar, we review these complexities and challenges of Parkinson's disease.", "title": "" }, { "docid": "3854ead43024ebc6ac942369a7381d71", "text": "During the past two decades, the prevalence of obesity in children has risen greatly worldwide. Obesity in childhood causes a wide range of serious complications, and increases the risk of premature illness and death later in life, raising public-health concerns. Results of research have provided new insights into the physiological basis of bodyweight regulation. However, treatment for childhood obesity remains largely ineffective. In view of its rapid development in genetically stable populations, the childhood obesity epidemic can be primarily attributed to adverse environmental factors for which straightforward, if politically difficult, solutions exist.", "title": "" }, { "docid": "ff4e424697cf6b400e84e17fc5b1c84f", "text": "Current static analysis techniques for Android applications operate at the Java level—that is, they analyze either the Java source code or the Dalvik bytecode. However, Android allows developers to write code in C or C++ that is cross-compiled to multiple binary architectures. Furthermore, the Java-written components and the native code components (C or C++) can interact. Native code can access all of the Android APIs that the Java code can access, as well as alter the Dalvik Virtual Machine, thus rendering static analysis techniques for Java unsound or misleading. In addition, malicious apps frequently hide their malicious functionality in native code or use native code to launch kernel exploits. It is because of these security concerns that previous research has proposed native code sandboxing, as well as mechanisms to enforce security policies in the sandbox. However, it is not clear whether the large-scale adoption of these mechanisms is practical: is it possible to define a meaningful security policy that can be imposed by a native code sandbox without breaking app functionality? In this paper, we perform an extensive analysis of the native code usage in 1.2 million Android apps. We first used static analysis to identify a set of 446k apps potentially using native code, and we then analyzed this set using dynamic analysis. This analysis demonstrates that sandboxing native code with no permissions is not ideal, as apps’ native code components perform activities that require Android permissions. However, our analysis provided very encouraging insights that make us believe that sandboxing native code can be feasible and useful in practice. In fact, it was possible to automatically generate a native code sandboxing policy, which is derived from our analysis, that limits many malicious behaviors while still allowing the correct execution of the behavior witnessed during dynamic analysis for 99.77% of the benign apps in our dataset. The usage of our system to generate policies would reduce the attack surface available to native code and, as a further benefit, it would also enable more reliable static analysis of Java code.", "title": "" }, { "docid": "d80070cf7ab3d3e75c2da1525e59be67", "text": "This paper presents for the first time the analysis and experimental validation of a six-slot four-pole synchronous reluctance motor with nonoverlapping fractional slot-concentrated windings. The machine exhibits high torque density and efficiency due to its high fill factor coils with very short end windings, facilitated by a segmented stator and bobbin winding of the coils. 
These advantages are coupled with its inherent robustness and low cost. The topology is presented as a logical step forward in advancing synchronous reluctance machines that have been universally wound with a sinusoidally distributed winding. The paper presents the motor design, performance evaluation through finite element studies and validation of the electromagnetic model, and thermal specification through empirical testing. It is shown that high performance synchronous reluctance motors can be constructed with single tooth wound coils, but considerations must be given regarding torque quality and the d-q axis inductances.", "title": "" }, { "docid": "974d7b697942a8872b01d7b5d2302750", "text": "Purpose – This study provides insights into corporate achievements in supply chain management (SCM) and logistics management and details how they might help disaster agencies. The authors highlight and identify current practices, particularities, and challenges in disaster relief supply chains. Design/methodology/approach – Both SCM and logistics management literature and examples drawn from real-life cases inform the development of the theoretical model. Findings – The theoretical, dual-cycle model that focuses on the key missions of disaster relief agencies: first, prevention and planning and, second, response and recovery. Three major contributions are offered: (1) a concise representation of current practices and particularities of disaster relief supply chains compared with commercial SCM; (2) challenges and barriers to the development of more efficient SCM practices, classified into learning, strategizing, and coordinating and measurement issues; and (3) a simple, functional model for understanding how collaborations between corporations and disaster relief agencies might help relief agencies meet SCM challenges. Research limitations/implications – The study does not address culture clash–related considerations. Rather than representing the entire scope of real-life situations and practices, the analysis relies on key assumptions to help conceptualize collaborative paths.", "title": "" }, { "docid": "1efeab8c3036ad5ec1b4dc63a857b392", "text": "In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle which integrates two sample-based motion planning techniques, Probabilistic Roadmaps and Rapidly Exploring Random Trees. Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner with little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques used with such a framework offer a surprisingly efficient method for dynamically reconfiguring a motion plan based on unforeseen contingencies which may arise during the execution of a plan. The framework is generic and can be used for additional platforms.", "title": "" }, { "docid": "361dc8037ebc30cd2f37f4460cf43569", "text": "OVERVIEW: Next-generation semiconductor factories need to support miniaturization below 100 nm and have higher production efficiency, mainly of 300-mm-diameter wafers. Particularly to reduce the price of semiconductor devices, shorten development time [thereby reducing the TAT (turn-around time)], and support frequent product changeovers, semiconductor manufacturers must enhance the productivity of their systems. 
To meet these requirements, Hitachi proposes solutions that will support e-manufacturing on the next-generation semiconductor production line (see Fig. 1).", "title": "" }, { "docid": "94ea8b56e8ade27c15e8603606003874", "text": "A woman was admitted for planned induction at 39+5 weeks gestation. This was her third pregnancy. She had two previous children who were fit and well. Antenatal scans showed a fetal intra-abdominal mass measuring 6.2×5.5×7 cm in the lower abdomen, which was compressing the bladder. The mass was thought to be originating from the ovary or the bowel. On postnatal examination, the baby girl had a distended and full abdomen. There was a right-sided abdominal mass palpable above the umbilicus and 3 cm in size. It was firm, smooth and mobile in consistency. She had a normal anus and external female genitalia, with evidence of a prolapsed vagina on crying. She had passed urine and opened her bowels. The baby was kept nil by mouth and on intravenous fluids until the abdominal radiography was performed. The image is shown in figure 1.", "title": "" } ]
scidocsrr
9114f78d3e27846180c1b251d02a9610
Performance of Neural Network Image Classification on Mobile CPU and GPU
[ { "docid": "ae5fac207e5d3bf51bffbf2ec01fd976", "text": "Deep learning has revolutionized the way sensor data are analyzed and interpreted. The accuracy gains these approaches offer make them attractive for the next generation of mobile, wearable and embedded sensory applications. However, state-of-the-art deep learning algorithms typically require a significant amount of device and processor resources, even just for the inference stages that are used to discriminate high-level classes from low-level data. The limited availability of memory, computation, and energy on mobile and embedded platforms thus pose a significant challenge to the adoption of these powerful learning techniques. In this paper, we propose SparseSep, a new approach that leverages the sparsification of fully connected layers and separation of convolutional kernels to reduce the resource requirements of popular deep learning algorithms. As a result, SparseSep allows large-scale DNNs and CNNs to run efficiently on mobile and embedded hardware with only minimal impact on inference accuracy. We experiment using SparseSep across a variety of common processors such as the Qualcomm Snapdragon 400, ARM Cortex M0 and M3, and Nvidia Tegra K1, and show that it allows inference for various deep models to execute more efficiently; for example, on average requiring 11.3 times less memory and running 13.3 times faster on these representative platforms.", "title": "" }, { "docid": "d2eacfccb44c7bd80def65b639643a74", "text": "Many mobile applications running on smartphones and wearable devices would potentially benefit from the accuracy and scalability of deep CNN-based machine learning algorithms. However, performance and energy consumption limitations make the execution of such computationally intensive algorithms on mobile devices prohibitive. We present a GPU-accelerated library, dubbed CNNdroid [1], for execution of trained deep CNNs on Android-based mobile devices. Empirical evaluations show that CNNdroid achieves up to 60X speedup and 130X energy saving on current mobile devices. The CNNdroid open source library is available for download at https://github.com/ENCP/CNNdroid", "title": "" }, { "docid": "dd5f9767c434c567e4c5948473b36958", "text": "The rapid emergence of head-mounted devices such as the Microsoft Holo-lens enables a wide variety of continuous vision applications. Such applications often adopt deep-learning algorithms such as CNN and RNN to extract rich contextual information from the first-person-view video streams. Despite the high accuracy, use of deep learning algorithms in mobile devices raises critical challenges, i.e., high processing latency and power consumption. In this paper, we propose DeepMon, a mobile deep learning inference system to run a variety of deep learning inferences purely on a mobile device in a fast and energy-efficient manner. For this, we designed a suite of optimization techniques to efficiently offload convolutional layers to mobile GPUs and accelerate the processing; note that the convolutional layers are the common performance bottleneck of many deep learning models. Our experimental results show that DeepMon can classify an image over the VGG-VeryDeep-16 deep learning model in 644ms on Samsung Galaxy S7, taking an important step towards continuous vision without imposing any privacy concerns nor networking cost.", "title": "" } ]
[ { "docid": "35dd432f881acb83d6f6a362d565b7aa", "text": "Multi-tenant database is a new cloud computing paradigm that has recently attracted attention to deliver database functionalities for multiple tenants to create, store, and access their databases over the internet. This multi-tenant database should be highly configurable and secure to meet tenants' expectations and their different business requirements. In this paper, we propose an architecture design to build an intermediate database layer to be used between software applications and Relational Database Management Systems (RDBMS) to store and access multiple tenants' data in the Elastic Extension Table (EET) multi-tenant database schema. This database layer combines multi-tenant relational tables and virtual relational tables and makes them work together to act as one database for each tenant. This architecture design is suitable for multi-tenant database environment that can run any business domain database by using a combination of a database schema, which contains shared physical structured tables and virtual structured tenant's tables. Further, this multi-tenant database architecture design can be used as a base to build software applications in general and Software as a Service (SaaS) applications in particular.", "title": "" }, { "docid": "ff933c57886cfb4ab74b9cbd9e4f3a58", "text": "Many systems, applications, and features that support cooperative work share two characteristics: A significant investment has been made in their development, and their successes have consistently fallen far short of expectations. Examination of several application areas reveals a common dynamic: 1) A factor contributing to the application’s failure is the disparity between those who will benefit from an application and those who must do additional work to support it. 2) A factor contributing to the decision-making failure that leads to ill-fated development efforts is the unique lack of management intuition for CSCW applications. 3) A factor contributing to the failure to learn from experience is the extreme difficulty of evaluating these applications. These three problem areas escape adequate notice due to two natural but ultimately misleading analogies: the analogy between multi-user application programs and multi-user computer systems, and the analogy between multi-user applications and single-user applications. These analogies influence the way we think about cooperative work applications and designers and decision-makers fail to recognize their limits. Several CSCW application areas are examined in some detail. Introduction. An illustrative example: automatic meeting", "title": "" }, { "docid": "fe513114c9c78c546ae7018ff84f9cab", "text": "Three-dimensional geometric morphometric (3DGM) methods for placing landmarks on digitized bones have become increasingly sophisticated in the last 20 years, including greater degrees of automation. One aspect shared by all 3DGM methods is that the researcher must designate initial landmarks. Thus, researcher interpretations of homology and correspondence are required for and influence representations of shape. We present an algorithm allowing fully automatic placement of correspondence points on samples of 3D digital models representing bones of different individuals/species, which can then be input into standard 3DGM software and analyzed with dimension reduction techniques. 
We test this algorithm against several samples, primarily a dataset of 106 primate calcanei represented by 1,024 correspondence points per bone. Results of our automated analysis of these samples are compared to a published study using a traditional 3DGM approach with 27 landmarks on each bone. Data were analyzed with morphologika(2.5) and PAST. Our analyses returned strong correlations between principal component scores, similar variance partitioning among components, and similarities between the shape spaces generated by the automatic and traditional methods. While cluster analyses of both automatically generated and traditional datasets produced broadly similar patterns, there were also differences. Overall these results suggest to us that automatic quantifications can lead to shape spaces that are as meaningful as those based on observer landmarks, thereby presenting potential to save time in data collection, increase completeness of morphological quantification, eliminate observer error, and allow comparisons of shape diversity between different types of bones. We provide an R package for implementing this analysis.", "title": "" }, { "docid": "edc924ce81cc5a0292728f39ae2cab0d", "text": "a r t i c l e i n f o Keywords: Consumer–brand identification Consumer self-identity Brand relationships Product category involvement The concept of consumer–brand identification (CBI) is central to our understanding of how, when, and why brands help consumers articulate their identities. This paper proposes and tests an integrative theoretical framework of the antecedents of CBI. Six drivers of CBI, a moderator, and two consequences are posited and tested with survey data from a large sample of German household consumers. The results confirm the influence of five of the six drivers, namely, brand–self similarity, brand distinctiveness, brand social benefits, brand warmth, and memorable brand experiences. Further, we find that all five of these antecedents have stronger causal relationships with CBI when consumers have higher involvement with the brand's product category. Finally, CBI is tied to two important pro-company consequences, brand loyalty and brand advocacy. Theoretical and managerial significance of the findings are discussed. \" Choices are made more easily—either more routinely or more impulsively , seemingly—because one object is symbolically more harmonious with our goals, feelings, and self-definitions than another. \" Sidney J. Levy (1959, p. 120) \" Why has the Toyota Prius enjoyed such success … when most other hybrid models struggle to find buyers? One answer may be that buyers of the Prius want everyone to know they are driving a hybrid …. In fact, more than half the Prius buyers surveyed this spring … said the main reason they purchased their car was that 'it makes a statement about me.' \" Micheline Maynard (2007)", "title": "" }, { "docid": "e902cdc8d2e06d7dd325f734b0a289b6", "text": "Vaccinium arctostaphylos is a traditional medicinal plant in Iran used for the treatment of diabetes mellitus. In our search for antidiabetic compounds from natural sources, we found that the extract obtained from V. arctostaphylos berries showed an inhibitory effect on pancreatic alpha-amylase in vitro [IC50 = 1.91 (1.89-1.94) mg/mL]. The activity-guided purification of the extract led to the isolation of malvidin-3-O-beta-glucoside as an a-amylase inhibitor. 
The compound demonstrated a dose-dependent enzyme inihibitory activity [IC50 = 0.329 (0.316-0.342) mM].", "title": "" }, { "docid": "62e0f08dc9f0415cdff69e7c94c82a9a", "text": "In August 2012, the American Psychological Association (APA) Council of Representatives voted overwhelmingly to adopt as APA policy a Resolution on the Recognition of Psychotherapy Effectiveness. This invited article traces the origins and intentions of that resolution and its protracted journey through the APA governance labyrinth. We summarize the planned dissemination and projected results of the resolution and identify several lessons learned through the entire process.", "title": "" }, { "docid": "bb77f2d4b85aaaee15284ddf7f16fb18", "text": "We present a demonstration of WalkCompass, a system to appear in the MobiSys 2014 main conference. WalkCompass exploits smartphone sensors to estimate the direction in which a user is walking. We find that several smartphone localization systems in the recent past, including our own, make a simplifying assumption that the user's walking direction is known. In trying to relax this assumption, we were not able to find a generic solution from past work. While intuition suggests that the walking direction should be detectable through the accelerometer, in reality this direction gets blended into various other motion patterns during the act of walking, including up and down bounce, side-to-side sway, swing of arms or legs, etc. WalkCompass analyzes the human walking dynamics to estimate the dominating forces and uses this knowledge to find the heading direction of the pedestrian. In the demonstration we will show the performance of this system when the user holds the smartphone on the palm. A collection of YouTube videos of the demo is posted at http://synrg.csl.illinois.edu/projects/ localization/walkcompass.", "title": "" }, { "docid": "2d02bf71ee22e062d12ce4ec0b53d4c9", "text": "BACKGROUND\nTherapies that maintain remission for patients with Crohn's disease are essential. Stable remission rates have been demonstrated for up to 2 years in adalimumab-treated patients with moderately to severely active Crohn's disease enrolled in the CHARM and ADHERE clinical trials.\n\n\nAIM\nTo present the long-term efficacy and safety of adalimumab therapy through 4 years of treatment.\n\n\nMETHODS\nRemission (CDAI <150), response (CR-100) and corticosteroid-free remission over 4 years, and maintenance of these endpoints beyond 1 year were assessed in CHARM early responders randomised to adalimumab. Corticosteroid-free remission was also assessed in all adalimumab-randomised patients using corticosteroids at baseline. Fistula healing was assessed in adalimumab-randomised patients with fistula at baseline. As observed, last observation carried forward and a hybrid nonresponder imputation analysis for year 4 (hNRI) were used to report efficacy. Adverse events were reported for any patient receiving at least one dose of adalimumab.\n\n\nRESULTS\nOf 329 early responders randomised to adalimumab induction therapy, at least 30% achieved remission (99/329) or CR-100 (116/329) at year 4 of treatment (hNRI). The majority of patients (54%) with remission at year 1 maintained this endpoint at year 4 (hNRI). At year 4, 16% of patients taking corticosteroids at baseline were in corticosteroid-free remission and 24% of patients with fistulae at baseline had healed fistulae. 
The incidence rates of adverse events remained stable over time.\n\n\nCONCLUSIONS\nProlonged adalimumab therapy maintained clinical remission and response in patients with moderately to severely active Crohn's disease for up to 4 years. No increased risk of adverse events or new safety signals were identified with long-term maintenance therapy. (clinicaltrials.gov number: NCT00077779).", "title": "" }, { "docid": "0fe9c5d1872969dc11691d5021d242a2", "text": "Received: 2 February 2009 Revised: 8 September 2009 2nd Revision: 9 January 2010 3rd Revision: 2 March 2010 Accepted: 4 March 2010 Abstract Recent rapid advances in Information and Communication Technologies (ICTs) have highlighted the rising importance of the Business Model (BM) concept in the field of Information Systems (IS). Despite agreement on its importance to an organization’s success, the concept is still fuzzy and vague, and there is little consensus regarding its compositional facets. Identifying the fundamental concepts, modeling principles, practical functions, and reach of the BM relevant to IS and other business concepts is by no means complete. This paper, following a comprehensive review of the literature, principally employs the content analysis method and utilizes a deductive reasoning approach to provide a hierarchical taxonomy of the BM concepts from which to develop a more comprehensive framework. This framework comprises four fundamental aspects. First, it identifies four primary BM dimensions along with their constituent elements forming a complete ontological structure of the concept. Second, it cohesively organizes the BM modeling principles, that is, guidelines and features. Third, it explains the reach of the concept showing its interactions and intersections with strategy, business processes, and IS so as to place the BM within the world of digital business. Finally, the framework explores three major functions of BMs within digital organizations to shed light on the practical significance of the concept. Hence, this paper links the BM facets in a novel manner offering an intact definition. In doing so, this paper provides a unified conceptual framework for the BM concept that we argue is comprehensive and appropriate to the complex nature of businesses today. This leads to fruitful implications for theory and practice and also enables us to suggest a research agenda using our conceptual framework. European Journal of Information Systems (2010) 19, 359–376. doi:10.1057/ejis.2010.21; published online 11 May 2010", "title": "" }, { "docid": "a0a618a4c5e81dce26d095daea7668e2", "text": "We study the efficiency of deblocking algorithms for improving visual signals degraded by blocking artifacts from compression. Rather than using only the perceptually questionable PSNR, we instead propose a block-sensitive index, named PSNR-B, that produces objective judgments that accord with observations. The PSNR-B modifies PSNR by including a blocking effect factor. We also use the perceptually significant SSIM index, which produces results largely in agreement with PSNR-B. Simulation results show that the PSNR-B results in better performance for quality assessment of deblocked images than PSNR and a well-known blockiness-specific index.", "title": "" }, { "docid": "97ef62d13180ee6bb44ec28ff3b3d53e", "text": "Glioblastoma tumour cells release microvesicles (exosomes) containing mRNA, miRNA and angiogenic proteins. These microvesicles are taken up by normal host cells, such as brain microvascular endothelial cells. 
By incorporating an mRNA for a reporter protein into these microvesicles, we demonstrate that messages delivered by microvesicles are translated by recipient cells. These microvesicles are also enriched in angiogenic proteins and stimulate tubule formation by endothelial cells. Tumour-derived microvesicles therefore serve as a means of delivering genetic information and proteins to recipient cells in the tumour environment. Glioblastoma microvesicles also stimulated proliferation of a human glioma cell line, indicating a self-promoting aspect. Messenger RNA mutant/variants and miRNAs characteristic of gliomas could be detected in serum microvesicles of glioblastoma patients. The tumour-specific EGFRvIII was detected in serum microvesicles from 7 out of 25 glioblastoma patients. Thus, tumour-derived microvesicles may provide diagnostic information and aid in therapeutic decisions for cancer patients through a blood test.", "title": "" }, { "docid": "f79167ce151d9f9c73cf307d4cff7fe7", "text": "Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning. Many real life data sets contain a small amount of labelled data points, that are typically disregarded when training generative models. We propose the Cluster-aware Generative Model, that uses unlabelled information to infer a latent representation that models the natural clustering of the data, and additional labelled data points to refine this clustering. The generative performances of the model significantly improve when labelled information is exploited, obtaining a log-likelihood of−79.38 nats on permutation invariant MNIST, while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised, and still improve the log-likelihood performance with respect to related methods.", "title": "" }, { "docid": "d66799a5d65a6f23527a33b124812ea6", "text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series is becoming a hot research topic recently. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. And for each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a briefly discussion on the representative methods recently. Furthermore, we also point out some key issues about multivariate time series anomaly. Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other relative domains.", "title": "" }, { "docid": "136a2f401b3af00f0f79b991ab65658f", "text": "Usage of online social business networks like LinkedIn and XING have become commonplace in today’s workplace. This research addresses the question of what factors drive the intention to use online social business networks. Theoretical frame of the study is the Technology Acceptance Model (TAM) and its extensions, most importantly the TAM2 model. Data has been collected via a Web Survey among users of LinkedIn and XING from January to April 2010. Of 541 initial responders 321 finished the questionnaire. 
Operationalization was tested using confirmatory factor analyses and causal hypotheses were evaluated by means of structural equation modeling. The core result is that the TAM2 model generally holds in the case of online social business network usage behavior, explaining 73% of the observed usage intention. This intention is most importantly driven by perceived usefulness, attitude towards usage and social norm, with the latter acting both directly and indirectly via perceived usefulness. However, perceived ease of use has—contrary to hypothesis—no direct effect on the attitude towards usage of online social business networks. Social norm has a strong indirect influence via perceived usefulness on attitude and intention, creating a network effect for peer users. The results of this research provide implications for online social business network design and marketing. Customers seem to evaluate ease of use as an integral part of the usefulness of such a service, which leads to a situation where it cannot be dealt with separately by a service provider. Furthermore, the strong direct impact of social norm implies application of viral and peer-to-peer marketing techniques, while its strong indirect effect also implies the presence of a network effect which stabilizes the ecosystem of online social business service vendors.", "title": "" }, { "docid": "71c7c98b55b2b2a9c475d4522310cfaa", "text": "This paper studies an active underground economy which specializes in the commoditization of activities such as credit card fraud, identity theft, spamming, phishing, online credential theft, and the sale of compromised hosts. Using a seven-month trace of logs collected from an active underground market operating on public Internet chat networks, we measure how the shift from “hacking for fun” to “hacking for profit” has given birth to a societal substrate mature enough to steal wealth into the millions of dollars in less than one year.", "title": "" }, { "docid": "5a011a87ce3f37dc6b944d2686fa2f73", "text": "Agents are self-contained objects within a software model that are capable of autonomously interacting with the environment and with other agents. Basing a model around agents (building an agent-based model, or ABM) allows the user to build complex models from the bottom up by specifying agent behaviors and the environment within which they operate. This is often a more natural perspective than the system-level perspective required of other modeling paradigms, and it allows greater flexibility to use agents in novel applications. This flexibility makes them ideal as virtual laboratories and testbeds, particularly in the social sciences where direct experimentation may be infeasible or unethical. ABMs have been applied successfully in a broad variety of areas, including heuristic search methods, social science models, combat modeling, and supply chains. This tutorial provides an introduction to tools and resources for prospective modelers, and illustrates ABM flexibility with a basic war-gaming example.", "title": "" }, { "docid": "0329376a86d45545c710b13c3ab30234", "text": "Over the past 20 years, neuroimaging has become a predominant technique in systems neuroscience. One might envisage that over the next 20 years the neuroimaging of distributed processing and connectivity will play a major role in disclosing the brain's functional architecture and operational principles.
The inception of this journal has been foreshadowed by an ever-increasing number of publications on functional connectivity, causal modeling, connectomics, and multivariate analyses of distributed patterns of brain responses. I accepted the invitation to write this review with great pleasure and hope to celebrate and critique the achievements to date, while addressing the challenges ahead.", "title": "" }, { "docid": "bab06ca527f4a56eff82ef486ac7d728", "text": "The meaning of a sentence is a function of the relations that hold between its words. We instantiate this relational view of semantics in a series of neural models based on variants of relation networks (RNs) which represent a set of objects (for us, words forming a sentence) in terms of representations of pairs of objects. We propose two extensions to the basic RN model for natural language. First, building on the intuition that not all word pairs are equally informative about the meaning of a sentence, we use constraints based on both supervised and unsupervised dependency syntax to control which relations influence the representation. Second, since higher-order relations are poorly captured by a sum of pairwise relations, we use a recurrent extension of RNs to propagate information so as to form representations of higher order relations. Experiments on sentence classification, sentence pair classification, and machine translation reveal that, while basic RNs are only modestly effective for sentence representation, recurrent RNs with latent syntax are a reliably powerful representational device.", "title": "" }, { "docid": "0c41de0df5dd88c87061c57ae26c5b32", "text": "Context. The share and importance of software within automotive vehicles is growing steadily. Most functionalities in modern vehicles, especially safety related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today’s automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs) and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry neither has infinite resources, nor has the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester’s experience, can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions. Objectives. Firstly, to identify the current approach used to test the distributed automotive embedded software functions in literature and in a case company. Secondly, propose and validate a feasible and effective verification strategy for testing the distributed software functions that would help improve test coverage while reducing test redundancies and test gaps. Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was through conducting interviews of different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and close-ended questions has been used. Apart from interviews, data from relevant artifacts in databases and archived documents has been used to achieve data triangulation. 
Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania. Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions.", "title": "" }, { "docid": "ad059332e36849857c9bf1a52d5b0255", "text": "Interaction Design Beyond Human Computer Interaction instructions guide, service manual guide and maintenance manual guide for the products. Before employing this manual, service or maintenance guide you should know detail regarding your products cause this manual for expert only. We hope ford alternator wiring diagram internal regulator and yet another manual of these lists a good choice for your to repair, fix and solve your product or service or device problems don't try an oversight.", "title": "" } ]
scidocsrr
9d0293b2cea4ec24b44744ab04027342
Users' Awareness of Privacy on Online Social Networking Sites - Case Facebook
[ { "docid": "25196ef0c4385ec44b62183d9c282fc6", "text": "It is not well understood how privacy concern and trust influence social interactions within social networking sites. An online survey of two popular social networking sites, Facebook and MySpace, compared perceptions of trust and privacy concern, along with willingness to share information and develop new relationships. Members of both sites reported similar levels of privacy concern. Facebook members expressed significantly greater trust in both Facebook and its members, and were more willing to share identifying information. Even so, MySpace members reported significantly more experience using the site to meet new people. These results suggest that in online interaction, trust is not as necessary in the building of new relationships as it is in face to face encounters. They also show that in an online site, the existence of trust and the willingness to share information do not automatically translate into new social interaction. This study demonstrates online relationships can develop in sites where perceived trust and privacy safeguards are weak.", "title": "" }, { "docid": "39cc52cd5ba588e9d4799c3b68620f18", "text": "Using data from a popular online social network site, this paper explores the relationship between profile structure (namely, which fields are completed) and number of friends, giving designers insight into the importance of the profile and how it works to encourage connections and articulated relationships between users. We describe a theoretical framework that draws on aspects of signaling theory, common ground theory, and transaction costs theory to generate an understanding of why certain profile fields may be more predictive of friendship articulation on the site. Using a dataset consisting of 30,773 Facebook profiles, we determine which profile elements are most likely to predict friendship links and discuss the theoretical and design implications of our findings.", "title": "" } ]
[ { "docid": "c7135d2633617dfb187112ea577d0685", "text": "Approximate Newton methods are a standard optimization tool which aim to maintain the benefits of Newton’s method, such as a fast rate of convergence, whilst alleviating its drawbacks, such as computationally expensive calculation or estimation of the inverse Hessian. In this work we investigate approximate Newton methods for policy optimization in Markov decision processes (MDPs). We first analyse the structure of the Hessian of the objective function for MDPs. We show that, like the gradient, the Hessian exhibits useful structure in the context of MDPs and we use this analysis to motivate two Gauss-Newton Methods for MDPs. Like the Gauss-Newton method for non-linear least squares, these methods involve approximating the Hessian by ignoring certain terms in the Hessian which are difficult to estimate. The approximate Hessians possess desirable properties, such as negative definiteness, and we demonstrate several important performance guarantees including guaranteed ascent directions, invariance to affine transformation of the parameter space, and convergence guarantees. We finally provide a unifying perspective of key policy search algorithms, demonstrating that our second Gauss-Newton algorithm is closely related to both the EM-algorithm and natural gradient ascent applied to MDPs, but performs significantly better in practice on a range of challenging domains.", "title": "" }, { "docid": "2ebb21cb1c6982d2d3839e2616cac839", "text": "In order to reduce micromouse dashing time in complex maze, and improve micromouse’s stability in high speed dashing, diagonal dashing method was proposed. Considering the actual dashing trajectory of micromouse in diagonal path, the path was decomposed into three different trajectories; Fully consider turning in and turning out of micromouse dashing action in diagonal, leading and passing of the every turning was used to realize micromouse posture adjustment, with the help of accelerometer sensor ADXL202, rotation angle error compensation was done and the micromouse realized its precise position correction; For the diagonal dashing, front sensor S1,S6 and accelerometer sensor ADXL202 were used to ensure micromouse dashing posture. Principle of new diagonal dashing method is verified by micromouse based on STM32F103. Experiments of micromouse dashing show that diagonal dashing method can greatly improve its stability, and also can reduce its dashing time in complex maze.", "title": "" }, { "docid": "089e1d2d96ae4ba94ac558b6cdccd510", "text": "HTTP Streaming is a recent topic in multimedia communications with on-going standardization activities, especially with the MPEG DASH standard which covers on demand and live services. One of the main issues in live services deployment is the reduction of the overall latency. Low or very low latency streaming is still a challenge. In this paper, we push the use of DASH to its limits with regards to latency, down to fragments being only one frame, and evaluate the overhead introduced by that approach and the combination of: low latency video coding techniques, in particular Gradual Decoding Refresh; low latency HTTP streaming, in particular using chunked-transfer encoding; and associated ISOBMF packaging. 
We experiment DASH streaming using these techniques in local networks to measure the actual end-to-end latency, as low as 240 milliseconds, for an encoding and packaging overhead in the order of 13% for HD sequences and thus validate the feasibility of very low latency DASH live streaming in local networks.", "title": "" }, { "docid": "add72d66c626f1a4df3e0820c629c75f", "text": "Cybersecurity is a complex and dynamic area where multiple actors act against each other through computer networks largely without any commonly accepted rules of engagement. Well-managed cybersecurity operations need a clear terminology to describe threats, attacks and their origins. In addition, cybersecurity tools and technologies need semantic models to be able to automatically identify threats and to predict and detect attacks. This paper reviews terminology and models of cybersecurity operations, and proposes approaches for semantic modelling of cybersecurity threats and attacks.", "title": "" }, { "docid": "58fda5b08ffe26440b173f363ca36292", "text": "The dependence on information technology became critical and IT infrastructure, critical data, intangible intellectual property are vulnerable to threats and attacks. Organizations install Intrusion Detection Systems (IDS) to alert suspicious traffic or activity. IDS generate a large number of alerts and most of them are false positive as the behavior construe for partial attack pattern or lack of environment knowledge. Monitoring and identifying risky alerts is a major concern to security administrator. The present work is to design an operational model for minimization of false positive alarms, including recurring alarms by security administrator. The architecture, design and performance of model in minimization of false positives in IDS are explored and the experimental results are presented with reference to lab environment.", "title": "" }, { "docid": "413d407b4e2727d18419c9537f2e556f", "text": "This paper describes the design of an automated triage and emergency management information system. The prototype system is capable of monitoring and assessing physiological parameters of individuals, transmitting pertinent medical data to and from multiple echelons of medical service, and providing filtered data for command and control applications. The system employs wireless networking, portable computing devices, and reliable messaging technology as a framework for information analysis, information movement, and decision support capabilities. The embedded medical model and physiological status assessment are based on input from humans and a pulse oximetry device. The physiological status determination methodology follows NATO defined guidelines for remote triage and is implemented using an approach based on fuzzy logic. The approach described can be used in both military and civilian", "title": "" }, { "docid": "2a00d77cb75767b3e4516ced59ea84f6", "text": "Men and women living in a rural community in Bakossiland, Cameroon were asked to rate the attractiveness of images of male or female figures manipulated to vary in somatotype, waist-to-hip ratio (WHR), secondary sexual traits, and other features. In Study 1, women rated mesomorphic (muscular) and average male somatotypes as most attractive, followed by ectomorphic (slim) and endomorphic (heavily built) figures. In Study 2, amount and distribution of masculine trunk (chest and abdominal) hair was altered progressively in a series of front-posed male figures. 
A significant preference for one of these images was found, but the most hirsute figure was not judged as most attractive. Study 3 assessed attractiveness of front-posed male figures which varied only in length of the non-erect penis. Extremes of penile size (smallest and largest of five images) were rated as significantly less attractive than three intermediate sizes. In Study 4, Bakossi men rated the attractiveness of back-posed female images varying in WHR (from 0.5-1.0). The 0.8 WHR figure was rated markedly more attractive than others. Study 5 rated the attractiveness of female skin color. Men expressed no consistent preference for either lighter or darker female figures. These results are the first of their kind reported for a Central African community and provide a useful cross-cultural perspective to published accounts on sexual selection, human morphology and attractiveness in the U.S., Europe, and elsewhere.", "title": "" }, { "docid": "3f90af944ed7603fa7bbe8780239116a", "text": "Display advertising has been a significant source of revenue for publishers and ad networks in online advertising ecosystem. One important business model in online display advertising is Ad Exchange marketplace, also called non-guaranteed delivery (NGD), in which advertisers buy targeted page views and audiences on a spot market through real-time auction. In this paper, we describe a bid landscape forecasting system in NGD marketplace for any advertiser campaign specified by a variety of targeting attributes. In the system, the impressions that satisfy the campaign targeting attributes are partitioned into multiple mutually exclusive samples. Each sample is one unique combination of quantified attribute values. We develop a divide-and-conquer approach that breaks down the campaign-level forecasting problem. First, utilizing a novel star-tree data structure, we forecast the bid for each sample using non-linear regression by gradient boosting decision trees. Then we employ a mixture-of-log-normal model to generate campaign-level bid distribution based on the sample-level forecasted distributions. The experiment results of a system developed with our approach show that it can accurately forecast the bid distributions for various campaigns running on the world's largest NGD advertising exchange system, outperforming two baseline methods in term of forecasting errors.", "title": "" }, { "docid": "bb840b5097d2a186bae4fa2fd5904fe7", "text": "Electricity consumer dishonesty is a problem faced by all power utilities. Finding efficient measurements for detecting fraudulent electricity consumption has been an active research area in recent years. This paper presents a new approach towards Non-Technical Loss (NTL) analysis for electric utilities using a novel intelligence-based technique, Support Vector Machine (SVM). The main motivation of this study is to assist Tenaga Nasional Berhad (TNB) in Malaysia to reduce its NTLs in the distribution sector due to electricity theft. The proposed model preselects suspected customers to be inspected onsite for fraud based on irregularities and abnormal consumption behavior. This approach provides a method of data mining and involves feature extraction from historical customer consumption data. The SVM based approach uses customer load profile information to expose abnormal behavior that is known to be highly correlated with NTL activities. 
The result yields classification classes that are used to shortlist potential fraud suspects for onsite inspection, based on significant behavior that emerges due to irregularities in consumption. Simulation results prove the proposed method is more effective compared to the current actions taken by TNB in order to reduce NTL activities.", "title": "" }, { "docid": "c00470d69400066d11374539052f4a86", "text": "When individuals learn facts (e.g., foreign language vocabulary) over multiple study sessions, the temporal spacing of study has a significant impact on memory retention. Behavioral experiments have shown a nonmonotonic relationship between spacing and retention: short or long intervals between study sessions yield lower cued-recall accuracy than intermediate intervals. Appropriate spacing of study can double retention on educationally relevant time scales. We introduce a Multiscale Context Model (MCM) that is able to predict the influence of a particular study schedule on retention for specific material. MCM’s prediction is based on empirical data characterizing forgetting of the material following a single study session. MCM is a synthesis of two existing memory models (Staddon, Chelaru, & Higa, 2002; Raaijmakers, 2003). On the surface, these models are unrelated and incompatible, but we show they share a core feature that allows them to be integrated. MCM can determine study schedules that maximize the durability of learning, and has implications for education and training. MCM can be cast either as a neural network with inputs that fluctuate over time, or as a cascade of leaky integrators. MCM is intriguingly similar to a Bayesian multiscale model of memory (Kording, Tenenbaum, & Shadmehr, 2007), yet MCM is better able to account for human declarative memory.", "title": "" }, { "docid": "db1b3a472b9d002cf8b901f96d20196b", "text": "Recent studies in NER use the supervised machine learning. This study used CRF as a learning algorithm, and applied word embedding to feature for NER training. Word embedding is helpful in many learning algorithms of NLP, indicating that words in a sentence are mapped by a real vector in a lowdimension space. As a result of comparing the performance of multiple techniques for word embedding to NER, it was found that CCA (85.96%) in Test A and Word2Vec (80.72%) in Test B exhibited the best performance.", "title": "" }, { "docid": "eb99d3fb9f6775453ac25861cb05f04c", "text": "Hate content in social media is ever increasing. While Facebook, Twitter, Google have attempted to take several steps to tackle this hate content, they most often risk the violation of freedom of speech. Counterspeech, on the other hand, provides an effective way of tackling the online hate without the loss of freedom of speech. Thus, an alternative strategy for these platforms could be to promote counterspeech as a defense against hate content. However, in order to have a successful promotion of such counterspeech, one has to have a deep understanding of its dynamics in the online world. Lack of carefully curated data largely inhibits such understanding. In this paper, we create and release the first ever dataset for counterspeech using comments from YouTube. The data contains 9438 manually annotated comments where the labels indicate whether a comment is a counterspeech or not. This data allows us to perform a rigorous measurement study characterizing the linguistic structure of counterspeech for the first time. 
This analysis results in various interesting insights such as: the counterspeech comments receive double the likes received by the non-counterspeech comments, for certain communities majority of the non-counterspeech comments tend to be hate speech, the different types of counterspeech are not all equally effective and the language choice of users posting counterspeech is largely different from those posting noncounterspeech as revealed by a detailed psycholinguistic analysis. Finally, we build a set of machine learning models that are able to automatically detect counterspeech in YouTube videos with an F1-score of 0.73.", "title": "" }, { "docid": "ed4dcf690914d0a16d2017409713ea5f", "text": "We argue that HCI has emerged as a design-oriented field of research, directed at large towards innovation, design, and construction of new kinds of information and interaction technology. But the understanding of such an attitude to research in terms of philosophical, theoretical, and methodological underpinnings seems however relatively poor within the field. This paper intends to specifically address what design 'is' and how it is related to HCI. First, three candidate accounts from design theory of what design 'is' are introduced; the conservative, the romantic, and the pragmatic. By examining the role of sketching in design, it is found that the designer becomes involved in a necessary dialogue, from which the design problem and its solution are worked out simultaneously as a closely coupled pair. In conclusion, it is proposed that we need to acknowledge, first, the role of design in HCI conduct, and second, the difference between the knowledge-generating Design-oriented Research and the artifact-generating conduct of Research-oriented Design.", "title": "" }, { "docid": "3165b876e7e1bcdccc261593235078f8", "text": "The next challenge of game AI lies in Real Time Strategy (RTS) games. RTS games provide partially observable gaming environments, where agents interact with one another in an action space much larger than that of GO. Mastering RTS games requires both strong macro strategies and delicate micro level execution. Recently, great progress has been made in micro level execution, while complete solutions for macro strategies are still lacking. In this paper, we propose a novel learning-based Hierarchical Macro Strategy model for mastering MOBA games, a sub-genre of RTS games. Trained by the Hierarchical Macro Strategy model, agents explicitly make macro strategy decisions and further guide their micro level execution. Moreover, each of the agents makes independent strategy decisions, while simultaneously communicating with the allies through leveraging a novel imitated crossagent communication mechanism. We perform comprehensive evaluations on a popular 5v5 Multiplayer Online Battle Arena (MOBA) game. Our 5-AI team achieves a 48% winning rate against human player teams which are ranked top 1% in the player ranking system.", "title": "" }, { "docid": "e9b438cfe853e98f05b661f9149c0408", "text": "Misinformation and fact-checking are opposite forces in the news environment: the former creates inaccuracies to mislead people, while the latter provides evidence to rebut the former. These news articles are often posted on social media and attract user engagement in the form of comments. In this paper, we investigate linguistic (especially emotional and topical) signals expressed in user comments in the presence of misinformation and fact-checking. 
We collect and analyze a dataset of 5,303 social media posts with 2,614,374 user comments from Facebook, Twitter, and YouTube, and associate these posts to fact-check articles from Snopes and PolitiFact for veracity rulings (i.e., from true to false). We find that linguistic signals in user comments vary significantly with the veracity of posts, e.g., we observe more misinformation-awareness signals and extensive emoji and swear word usage with falser posts. We further show that these signals can help to detect misinformation. In addition, we find that while there are signals indicating positive effects after fact-checking, there are also signals indicating potential \"backfire\" effects.", "title": "" }, { "docid": "6e2fcb03490828649cc960d97c8de157", "text": "Scars, marks and tattoos (SMT) are being increasingly used for suspect and victim identification in forensics and law enforcement agencies. Tattoos, in particular, are getting serious attention because of their visual and demographic characteristics as well as their increasing prevalence. However, current tattoo matching procedure requires human-assigned class labels in the ANSI/NIST ITL 1-2000 standard which makes it time consuming and subjective with limited retrieval performance. Further, tattoo images are complex and often contain multiple objects with large intra-class variability, making it very difficult to assign a single category in the ANSI/NIST standard. We describe a content-based image retrieval (CBIR) system for matching and retrieving tattoo images. Based on scale invariant feature transform (SIFT) features extracted from tattoo images and optional accompanying demographical information, our system computes feature-based similarity between the query tattoo image and tattoos in the criminal database. Experimental results on two different tattoo databases show encouraging results.", "title": "" }, { "docid": "c7d17145605864aa28106c14954dcae5", "text": "Person re-identification (ReID) is to identify pedestrians observed from different camera views based on visual appearance. It is a challenging task due to large pose variations, complex background clutters and severe occlusions. Recently, human pose estimation by predicting joint locations was largely improved in accuracy. It is reasonable to use pose estimation results for handling pose variations and background clutters, and such attempts have obtained great improvement in ReID performance. However, we argue that the pose information was not well utilized and hasn't yet been fully exploited for person ReID. In this work, we introduce a novel framework called Attention-Aware Compositional Network (AACN) for person ReID. AACN consists of two main components: Pose-guided Part Attention (PPA) and Attention-aware Feature Composition (AFC). PPA is learned and applied to mask out undesirable background features in pedestrian feature maps. Furthermore, pose-guided visibility scores are estimated for body parts to deal with part occlusion in the proposed AFC module. Extensive experiments with ablation analysis show the effectiveness of our method, and state-of-the-art results are achieved on several public datasets, including Market-1501, CUHK03, CUHK01, SenseReID, CUHK03-NP and DukeMTMC-reID.", "title": "" }, { "docid": "fd18cb0cc94b336ff32b29e0f27363dc", "text": "We have developed a real-time algorithm for detection of the QRS complexes of ECG signals. It reliably recognizes QRS complexes based upon digital analyses of slope, amplitude, and width. 
A special digital bandpass filter reduces false detections caused by the various types of interference present in ECG signals. This filtering permits use of low thresholds, thereby increasing detection sensitivity. The algorithm automatically adjusts thresholds and parameters periodically to adapt to such ECG changes as QRS morphology and heart rate. For the standard 24 h MIT/BIH arrhythmia database, this algorithm correctly detects 99.3 percent of the QRS complexes.", "title": "" }, { "docid": "833786dcf2288f21343d60108819fe49", "text": "This paper describes an audio event detection system which automatically classifies an audio event as ambient noise, scream or gunshot. The classification system uses two parallel GMM classifiers for discriminating screams from noise and gunshots from noise. Each classifier is trained using different features, appropriately chosen from a set of 47 audio features, which are selected according to a 2-step process. First, feature subsets of increasing size are assembled using filter selection heuristics. Then, a classifier is trained and tested with each feature subset. The obtained classification performance is used to determine the optimal feature vector dimension. This allows a noticeable speed-up w.r.t. wrapper feature selection methods. In order to validate the proposed detection algorithm, we carried out extensive experiments on a rich set of gunshots and screams mixed with ambient noise at different SNRs. Our results demonstrate that the system is able to guarantee a precision of 90% at a false rejection rate of 8%.", "title": "" }, { "docid": "2524e651a08ce45419f760ebf269c0fc", "text": "Goal: Today's financial markets are of complex behavior which is the result of decisions made by many traders. Goal of this research is to calculate the relationship between financial markets stock prices, volumes, counts in financial news and tweets.\n Method: Collect the data sets for the three companies - Apple, Google and Sony\n 1. Collect tweets using Twitter API written in Python and extract tweet counts only related to stocks for the above companies.\n 2. Collect News data counts using News API, written in Python, only related to stocks for the above companies.\n 3. Collect stocks data including Volume, Close Price, etc. for the above companies.\n Findings: We find a positive correlation between the daily number of mentions of the above companies in the Tweets, News, daily stocks close prices and daily transactions volume of a company's stock after the tweets and news are released. Our results provide measurable support for the suggestion that activities in financial markets, news and tweets are fundamentally interlinked.", "title": "" } ]
scidocsrr
55b440d91df4c4ddafab74d83d314d1d
Depth Estimation Using Monocular and Stereo Cues
[ { "docid": "dd1b20766f2b8099b914c780fb8cc03c", "text": "Many computer vision algorithms limit their performance by ignoring the underlying 3D geometric structure in the image. We show that we can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes. Geometric classes describe the 3D orientation of an image region with respect to the camera. We provide a multiple-hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label. These confidences can then be used to improve the performance of many other applications. We provide a thorough quantitative evaluation of our algorithm on a set of outdoor images and demonstrate its usefulness in two applications: object detection and automatic single-view reconstruction.", "title": "" }, { "docid": "5497e6be671aa7b5f412590873b04602", "text": "Since the first shape-from-shading (SFS) technique was developed by Horn in the early 1970s, many different approaches have emerged. In this paper, six well-known SFS algorithms are implemented and compared. The performance of the algorithms was analyzed on synthetic images using mean and standard deviation of depth (Z) error, mean of surface gradient (p, q) error and CPU timing. Each algorithm works well for certain images, but performs poorly for others. In general, minimization approaches are more robust, while the other approaches are faster. The implementation of these algorithms in C, and images used in this paper, are available by anonymous ftp under the pub/tech_paper/survey directory at eustis.cs.ucf.edu (132.170.108.42). These are also part of the electronic version of the paper.", "title": "" }, { "docid": "350c899dbd0d9ded745b70b6f5e97d19", "text": "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.", "title": "" } ]
[ { "docid": "233ee357b5785572f50b79d6dd936e7c", "text": "graph is a simple, powerful, elegant abstraction with broad applicability in computer science and many related fields. Algorithms that operate on graphs see heavy use in both theoretical and practical contexts. Graphs have a very natural visual representation as nodes and connecting links arranged in space. Seeing this structure explicitly can aid tasks in many domains. Many people automatically sketch such a picture when thinking about small graphs, often including simple annotations. The pervasiveness of visual representations of small graphs testifies to their usefulness. On the other hand, although many large data sets can be expressed as graphs, few such visual representations exist. What causes this discrepancy? For one thing, graph layout poses a hard problem, 1 one that current tools just can't overcome. Conventional systems often falter when handling hundreds of edges, and none can handle more than a few thousand edges. 2 However, nonvisual manipulation of graphs with 50,000 edges is commonplace , and much larger instances exist. We can consider the Web as an extreme example of a graph with many millions of nodes and edges. Although many individual Web sites stay quite small, a significant number have more than 20,000 documents. The Unix file system reachable from a single networked workstation might include more than 100,000 files scattered across dozens of gigabytes worth of remotely mounted disk drives. Computational complexity is not the only reason that software to visually manipulate large graphs has lagged behind software to computationally manipulate them. Many previous graph layout systems have focused on fine-tuning the layout of relatively small graphs in support of polished presentations. A graph drawing system that focuses on the interactive browsing of large graphs can instead target the quite different tasks of browsing and exploration. Many researchers in scientific visual-ization have recognized the split between explanatory and exploratory goals. This distinction proves equally relevant for graph drawing. Contribution This article briefly describes a software system that explicitly attempts to handle much larger graphs than previous systems and support dynamic exploration rather than final presentation. I'll then discuss the applicability of this system to goals beyond simple exploration. A software system that supports graph exploration should include both a layout and an interactive drawing component. I have developed new algorithms for both layout and drawing—H3 and H3Viewer. A paper from InfoVis 97 contains a more extensive presentation of the H3 layout algorithm. 3 The H3Viewer drawing algorithm remains …", "title": "" }, { "docid": "09c4b35650141dfaf6e945dd6460dcf6", "text": "H2 histamine receptors are localized postsynaptically in the CNS. The aim of this study was to evaluate the effects of acute (1 day) and prolonged (7 day) administration of the H2 histamine receptor antagonist, famotidine, on the anticonvulsant activity of conventional antiepileptic drugs (AEDs; valproate, carbamazepine, diphenylhydantoin and phenobarbital) against maximal electroshock (MES)-induced seizures in mice. In addition, the effects of these drugs alone or in combination with famotidine were studied on motor performance and long-term memory. The influence of H2 receptor antagonist on brain concentrations and free plasma levels of the antiepileptic drugs was also evaluated. 
After acute or prolonged administration of famotidine (at dose of 10mg/kg) the drug raised the threshold for electroconvulsions. No effect was observed on this parameter at lower doses. Famotidine (5mg/kg), given acutely, significantly enhanced the anticonvulsant activity of valproate, which was expressed by a decrease in ED50. After the 7-day treatment, famotidine (5mg/kg) increased the anticonvulsant activity of diphenylhydantoin against MES. Famotidine (5mg/kg), after acute and prolonged administration, combined with valproate, phenobarbital, diphenylhydantoin and carbamazepine did not alter their free plasma levels. In contrast, brain concentrations of valproate were elevated for 1-day treatment with famotidine (5mg/kg). Moreover, famotidine co-applied with AEDs, given prolonged, worsened motor coordination in mice treated with carbamazepine or diphenylhydantoin. In contrast this histamine antagonist, did not impair the performance of mice evaluated in the long-term memory task. The results of this study indicate that famotidine modifies the anticonvulsant activity of some antiepileptic drugs.", "title": "" }, { "docid": "d81d4bc4e8d2bfb0db1fd4141bf2191c", "text": "Anton 2 is a second-generation special-purpose supercomputer for molecular dynamics simulations that achieves significant gains in performance, programmability, and capacity compared to its predecessor, Anton 1. The architecture of Anton 2 is tailored for fine-grained event-driven operation, which improves performance by increasing the overlap of computation with communication, and also allows a wider range of algorithms to run efficiently, enabling many new software-based optimizations. A 512-node Anton 2 machine, currently in operation, is up to ten times faster than Anton 1 with the same number of nodes, greatly expanding the reach of all-atom biomolecular simulations. Anton 2 is the first platform to achieve simulation rates of multiple microseconds of physical time per day for systems with millions of atoms. Demonstrating strong scaling, the machine simulates a standard 23,558-atom benchmark system at a rate of 85 μs/day---180 times faster than any commodity hardware platform or general-purpose supercomputer.", "title": "" }, { "docid": "fd20f14df55653a30c8ea624f38a7dce", "text": "In this paper, a cooperative two-hop communication scheme, together with opportunistic relaying (OR), is applied within a mobile wireless body area network (WBAN). Its effectiveness in interference mitigation is investigated in a scenario where there are multiple closely-located networks. Due to a typical WBAN's nature, no coordination is used among different WBANs. A suitable time-division-multiple-access (TDMA) scheme is adopted as both an intra-network and also an internetwork access scheme. Extensive on-body and off-body channel gain measurements are employed to gauge performance, which are overlaid to simulate a realistic WBAN working environment. It is found that opportunistic relaying is able to improve the signal-to-interference-plus-noise ratio (SINR) performance at an outage probability of 10% by an average of 5 dB, and it is also shown that it can reduce level crossing rate (LCR) significantly at low SINRs. Furthermore, this scheme is more efficient when on-body channels fade more rapidly.", "title": "" }, { "docid": "d7f878ed79899f72d5d7bf58a7dcaa40", "text": "We report in detail the decoding strategy that we used for the past two Darpa Rich Transcription evaluations (RT’03 and RT’04) which is based on finite state automata (FSA). 
We discuss the format of the static decoding graphs, the particulars of our Viterbi implementation, the lattice generation and the likelihood evaluation. This paper is intended to familiarize the reader with some of the design issues encountered when building an FSA decoder. Experimental results are given on the EARS database (English conversational telephone speech) with emphasis on our faster than real-time system.", "title": "" }, { "docid": "563abf001fd70dd0027d333f01c5b36c", "text": "We have now confirmed the existence of > 1800 planets orbiting stars other than the Sun; known as extrasolar planets or exoplanets. The different methods for detecting such planets are sensitive to different regions of parameter space, and so, we are discovering a wide diversity of exoplanets and exoplanetary systems. Characterizing such planets is difficult, but we are starting to be able to determine something of their internal composition and are beginning to be able to probe their atmospheres, the first step towards the detection of bio-signatures and, hence, determining if a planet could be habitable or not. Here, I will review how we detect exoplanets, how we characterize exoplanetary systems and the exoplanets themselves, where we stand with respect to potentially habitable planets and how we are progressing towards being able to actually determine if a planet could host life or not.", "title": "" }, { "docid": "9c4c08f608438d4c7dabdcda9b8091f1", "text": "This paper describes a 32Mb SRAM that has been designed and fabricated in a 65nm low-power CMOS technology. The design has also been migrated to 45nm bulk and SOI technologies. The 68mm die features read and write-assist circuit techniques that expand the operating voltage range and improve manufacturability across technology platforms", "title": "" }, { "docid": "ca9a7a1f7be7d494f6c0e3e4bb408a95", "text": "An enduring and richly elaborated dichotomy in cognitive neuroscience is that of reflective versus reflexive decision making and choice. Other literatures refer to the two ends of what is likely to be a spectrum with terms such as goal-directed versus habitual, model-based versus model-free or prospective versus retrospective. One of the most rigorous traditions of experimental work in the field started with studies in rodents and graduated via human versions and enrichments of those experiments to a current state in which new paradigms are probing and challenging the very heart of the distinction. We review four generations of work in this tradition and provide pointers to the forefront of the field's fifth generation.", "title": "" }, { "docid": "35d9bc68be9b46167f6463ad05e694a6", "text": "Recently, deep reinforcement learning (DRL) has been used for dialogue policy optimization. However, many DRL-based policies are not sample-efficient. Most recent advances focus on improving DRL optimization algorithms to address this issue. Here, we take an alternative route of designing neural network structure that is better suited for DRL-based dialogue management. The proposed structured deep reinforcement learning is based on graph neural networks (GNN), which consists of some sub-networks, each one for a node on a directed graph. The graph is defined according to the domain ontology and each node can be considered as a sub-agent. During decision making, these sub-agents have internal message exchange between neighbors on the graph. We also propose an approach to jointly optimize the graph structure as well as the parameters of GNN. 
Experiments show that structured DRL significantly outperforms previous state-of-the-art approaches in almost all of the 18 tasks of the PyDial benchmark.", "title": "" }, { "docid": "2a8c3676233cf1ae61fe91a7af3873d9", "text": "Rumination has attracted increasing theoretical and empirical interest in the past 15 years. Previous research has demonstrated significant relationships between rumination, depression, and metacognition. Two studies were conducted to further investigate these relationships and test the fit of a clinical metacognitive model of rumination and depression in samples of both depressed and nondepressed participants. In these studies, we collected cross-sectional data of rumination, depression, and metacognition. The relationships among variables were examined by testing the fit of structural equation models. In the study on depressed participants, a good model fit was obtained consistent with predictions. There were similarities and differences between the depressed and nondepressed samples in terms of relationships among metacognition, rumination, and depression. In each case, theoretically consistent paths between positive metacognitive beliefs, rumination, negative metacognitive beliefs, and depression were evident. The conceptual and clinical implications of these data are discussed.", "title": "" }, { "docid": "e11b6fd2dcec42e7b726363a869a0d95", "text": "Future frame prediction in videos is a promising avenue for unsupervised video representation learning. Video frames are naturally generated by the inherent pixel flows from preceding frames based on the appearance and motion dynamics in the video. However, existing methods focus on directly hallucinating pixel values, resulting in blurry predictions. In this paper, we develop a dual motion Generative Adversarial Net (GAN) architecture, which learns to explicitly enforce future-frame predictions to be consistent with the pixel-wise flows in the video through a duallearning mechanism. The primal future-frame prediction and dual future-flow prediction form a closed loop, generating informative feedback signals to each other for better video prediction. To make both synthesized future frames and flows indistinguishable from reality, a dual adversarial training method is proposed to ensure that the futureflow prediction is able to help infer realistic future-frames, while the future-frame prediction in turn leads to realistic optical flows. Our dual motion GAN also handles natural motion uncertainty in different pixel locations with a new probabilistic motion encoder, which is based on variational autoencoders. Extensive experiments demonstrate that the proposed dual motion GAN significantly outperforms stateof-the-art approaches on synthesizing new video frames and predicting future flows. Our model generalizes well across diverse visual scenes and shows superiority in unsupervised video representation learning.", "title": "" }, { "docid": "a58769ca02b9409a983ac6d7ba69f0be", "text": "In this paper, we describe an approach for the automatic medical annotation task of the 2008 CLEF cross-language image retrieval campaign (ImageCLEF). The data comprise 12076 fully annotated images according to the IRMA code. This work is focused on the process of feature extraction from images and hierarchical multi-label classification. To extract features from the images we used a technique called: local distribution of edges. With this techniques each image was described with 80 variables. 
The goal of the classification task was to classify an image according to the IRMA code. The IRMA code is organized hierarchically. Hence, as classifer we selected an extension of the predictive clustering trees (PCTs) that is able to handle this type of data. Further more, we constructed ensembles (Bagging and Random Forests) that use PCTs as base classifiers.", "title": "" }, { "docid": "a9c4f01cfdbdde6245d99a9c5056f83f", "text": "Brachyolmia (BO) is a heterogeneous group of skeletal dysplasias with skeletal changes limited to the spine or with minimal extraspinal features. BO is currently classified into types 1, 2, 3, and 4. BO types 1 and 4 are autosomal recessive conditions caused by PAPSS2 mutations, which may be merged together as an autosomal recessive BO (AR-BO). The clinical and radiological signs of AR-BO in late childhood have already been reported; however, the early manifestations and their age-dependent evolution have not been well documented. We report an affected boy with AR-BO, whose skeletal abnormalities were detected in utero and who was followed until 10 years of age. Prenatal ultrasound showed bowing of the legs. In infancy, radiographs showed moderate platyspondyly and dumbbell deformity of the tubular bones. Gradually, the platyspondyly became more pronounced, while the bowing of the legs and dumbbell deformities of the tubular bones diminished with age. In late childhood, the overall findings were consistent with known features of AR-BO. Genetic testing confirmed the diagnosis. Being aware of the initial skeletal changes may facilitate early diagnosis of PAPSS2-related skeletal dysplasias.", "title": "" }, { "docid": "de3f2ad88e3a99388975cc3da73e5039", "text": "Machine-learning techniques have recently been proved to be successful in various domains, especially in emerging commercial applications. As a set of machine-learning techniques, artificial neural networks (ANNs), requiring considerable amount of computation and memory, are one of the most popular algorithms and have been applied in a broad range of applications such as speech recognition, face identification, natural language processing, ect. Conventionally, as a straightforward way, conventional CPUs and GPUs are energy-inefficient due to their excessive effort for flexibility. According to the aforementioned situation, in recent years, many researchers have proposed a number of neural network accelerators to achieve high performance and low power consumption. Thus, the main purpose of this literature is to briefly review recent related works, as well as the DianNao-family accelerators. In summary, this review can serve as a reference for hardware researchers in the area of neural networks.", "title": "" }, { "docid": "bd2af30c9bc44b64d91bd4cde32ca45d", "text": "The oneM2M standard is a global initiative led jointly by major standards organizations around the world in order to develop a unique architecture for M2M communications. Prior standards, and also oneM2M, while focusing on achieving interoperability at the communication level, do not achieve interoperability at the semantic level. An expressive ontology for IoT called IoT-O is proposed, making best use of already defined ontologies in specific domains such as sensor, observation, service, quantity kind, units, or time. IoT-O also defines some missing concepts relevant for IoT such as thing, node, actuator, and actuation. The extension of the oneM2M standard to support semantic data interoperability based on IoT-O is discussed. 
Finally, through comprehensive use cases, benefits of the extended standard are demonstrated, ranging from heterogeneous device interoperability to autonomic behavior achieved by automated reasoning.", "title": "" }, { "docid": "3fb840309fcd22533cf86f57dbae22b5", "text": "Non-volatile RAM (NVRAM) makes it possible for data structures to tolerate transient failures, assuming however that programmers have designed these structures such that their consistency is preserved upon recovery. Previous approaches are typically transactional and inherently make heavy use of logging, resulting in implementations that are significantly slower than their DRAM counterparts. In this paper, we introduce a set of techniques aimed at lock-free data structures that, in the large majority of cases, remove the need for logging (and costly durable store instructions) both in the data structure algorithm and in the associated memory management scheme. Together, these generic techniques enable us to design what we call log-free concurrent data structures, which, as we illustrate on linked lists, hash tables, skip lists, and BSTs, can provide several-fold performance improvements over previous transaction-based implementations, with overheads of the order of milliseconds for recovery after a failure. We also highlight how our techniques can be integrated into practical systems, by presenting a durable version of Memcached that maintains the performance of its volatile counterpart.", "title": "" }, { "docid": "ae12d709da329eea3cc8e49c98c21518", "text": "This paper aims to explore how socialand self-factors may affect consumers’ brand loyalty while they follow companies’ microblogs. Drawing upon the commitment-trust theory, social influence theory, and self-congruence theory, we propose that network externalities, social norms, and self-congruence are the key determinants in the research model. The impacts of these factors on brand loyalty will be mediated by brand trust and brand commitment. We empirically test the model through an online survey on an existing microblogging site. The findings illustrate that network externalities and self-congruence can positively affect brand trust, which subsequently leads to brand commitment and brand loyalty. Meanwhile, social norms, together with self-congruence, directly posit influence on brand commitment. Brand commitment is then positively associated with brand loyalty. We believe that the findings of this research can contribute to the literature. We offer new insights regarding how consumers’ brand loyalty develops from the two social-factors and their self-congruence with the brand. Company managers could also apply our findings to strengthen their relationship marketing with consumers on microblogging sites.", "title": "" }, { "docid": "1986b84084202aaf3b6aee4df9fea8e2", "text": "Electronic marketplaces (EMs) are an important empirical phenomenon, because they are theoretically linked to significant economic and business effects. Different types of EMs have been identified; further, some researchers link different EM types with different impacts. Because the effects of EMs may vary with types, classifying and identifying the characteristics of EM types are fundamental to sound research. Some prior approaches to EM classification have been based on empirical observations, others have been theoretically motivated; each has strengths and limitations. This paper presents a third approach: surfacing strategic archetypes. 
The strategic archetypes approach has the empirical fidelity associated with the large numbers of attributes considered in the empirical classification approach, but the parsimony of types and the theoretical linkages associated with the theoretical classification approach. The strategic archetypes approach seeks a manageable number of EM configuration types in which the attributes are theoretically linked to each other and to hypothesized outcomes like performance and impacts. The strategic archetypes approach has the potential to inform future theoretical and empirical investigations of electronic marketplaces and to translate research findings into successful recommendations for practice.", "title": "" }, { "docid": "074624b6db03cca1e83e9c40679ce62b", "text": "In this project a human robot interaction system was developed in order to let people naturally play rock-paper-scissors games against a smart robotic opponent. The robot does not perform random choices, the system is able to analyze the previous rounds trying to forecast the next move. A Machine Learning algorithm based on Gaussian Mixture Model (GMM) allows us to increase the percentage of robot victories. This is a very important aspect in the natural interaction between human and robot, in fact, people do not like playing against “stupid” machines, while they are stimulated in confronting with a skilled opponent.", "title": "" }, { "docid": "7db1b370d0e14e80343cbc7718bbb6c9", "text": "T free-riding problem occurs if the presales activities needed to sell a product can be conducted separately from the actual sale of the product. Intuitively, free riding should hurt the retailer that provides that service, but the author shows analytically that free riding benefits not only the free-riding retailer, but also the retailer that provides the service when customers are heterogeneous in terms of their opportunity costs for shopping. The service-providing retailer has a postservice advantage, because customers who have resolved their matching uncertainty through sales service incur zero marginal shopping cost if they purchase from the service-providing retailer rather than the free-riding retailer. Moreover, allowing free riding gives the free rider less incentive to compete with the service provider on price, because many customers eventually will switch to it due to their own free riding. In turn, this induced soft strategic response enables the service provider to charge a higher price and enjoy the strictly positive profit that otherwise would have been wiped away by head-to-head price competition. Therefore, allowing free riding can be regarded as a necessary mechanism that prevents an aggressive response from another retailer and reduces the intensity of price competition.", "title": "" } ]
scidocsrr
95c5f3114d87c1ab4a1e9a472bb0b077
Generic Object Detection With Dense Neural Patterns and Regionlets
[ { "docid": "28fd803428e8f40a4627e05a9464e97b", "text": "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.", "title": "" }, { "docid": "5b0e088e2bddd0535bc9d2dfbfeb0298", "text": "We had previously shown that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known radial basis functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose to use the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. 
In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions, and several perceptron-like neural networks with one hidden layer.", "title": "" }, { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" } ]
[ { "docid": "1a5c79a9f2c22681dc558876d5b358e5", "text": "An evolution-based framework for understanding biological and cultural influences on children's cognitive and academic development is presented. The utility of this framework is illustrated within the mathematical domain and serves as a foundation for examining current approaches to educational reform in the United States. Within this framework, there are two general classes of cognitive ability, biologically primary and biologically secondary. Biologically primary cognitive abilities appear to have evolved largely by means of natural or sexual selection. Biologically secondary cognitive abilities reflect the co-optation of primary abilities for purposes other than the original evolution-based function and appear to develop only in specific cultural contexts. A distinction between these classes of ability has important implications for understanding children's cognitive development and achievement.", "title": "" }, { "docid": "5528b738695f6ff0ac17f07178a7e602", "text": "Multiple genetic pathways act in response to developmental cues and environmental signals to promote the floral transition, by regulating several floral pathway integrators. These include FLOWERING LOCUS T (FT) and SUPPRESSOR OF OVEREXPRESSION OF CONSTANS 1 (SOC1). We show that the flowering repressor SHORT VEGETATIVE PHASE (SVP) is controlled by the autonomous, thermosensory, and gibberellin pathways, and directly represses SOC1 transcription in the shoot apex and leaf. Moreover, FT expression in the leaf is also modulated by SVP. SVP protein associates with the promoter regions of SOC1 and FT, where another potent repressor FLOWERING LOCUS C (FLC) binds. SVP consistently interacts with FLC in vivo during vegetative growth and their function is mutually dependent. Our findings suggest that SVP is another central regulator of the flowering regulatory network, and that the interaction between SVP and FLC mediated by various flowering genetic pathways governs the integration of flowering signals.", "title": "" }, { "docid": "0f92bd13b589f0f5328620681547b3ea", "text": "By integrating the perspectives of social presence, interactivity, and peer motivation, this study developed a theoretical model to examine the factors affecting members' purchase intention in the context of social media brand community. Data collected from members of a fan page brand community on Facebook in Taiwan was used to test the model. The results also show that peer extrinsic motivation and peer intrinsic motivation have positive influences on purchase intention. The results also reveal that human-message interaction exerts significant influence on peer extrinsic motivation and peer intrinsic motivation, while human-human interaction has a positive effect on human-message interaction. Finally, the results report that awareness impacts human-message interaction significantly, whereas awareness, affective social presence, and cognitive social presence influence human-human interaction significantly.", "title": "" }, { "docid": "955bd83f9135336d9c5d887065d31f04", "text": "Current dialogue systems focus more on textual and speech context knowledge and are usually based on two speakers. Some recent work has investigated static image-based dialogue. However, several real-world human interactions also involve dynamic visual context (similar to videos) as well as dialogue exchanges among multiple speakers. 
To move closer towards such multimodal conversational skills and visually-situated applications, we introduce a new video-context, many-speaker dialogue dataset based on livebroadcast soccer game videos and chats from Twitch.tv. This challenging testbed allows us to develop visually-grounded dialogue models that should generate relevant temporal and spatial event language from the live video, while also being relevant to the chat history. For strong baselines, we also present several discriminative and generative models, e.g., based on tridirectional attention flow (TriDAF). We evaluate these models via retrieval ranking-recall, automatic phrasematching metrics, as well as human evaluation studies. We also present dataset analyses, model ablations, and visualizations to understand the contribution of different modalities and model components.", "title": "" }, { "docid": "6d9a9c9903cc358f6bb8c8e5fdf7d231", "text": "A video copy detection system that is based on content fingerprinting and can be used for video indexing and copyright applications is proposed. The system relies on a fingerprint extraction algorithm followed by a fast approximate search algorithm. The fingerprint extraction algorithm extracts compact content-based signatures from special images constructed from the video. Each such image represents a short segment of the video and contains temporal as well as spatial information about the video segment. These images are denoted by temporally informative representative images. To find whether a query video (or a part of it) is copied from a video in a video database, the fingerprints of all the videos in the database are extracted and stored in advance. The search algorithm searches the stored fingerprints to find close enough matches for the fingerprints of the query video. The proposed fast approximate search algorithm facilitates the online application of the system to a large video database of tens of millions of fingerprints, so that a match (if it exists) is found in a few seconds. The proposed system is tested on a database of 200 videos in the presence of different types of distortions such as noise, changes in brightness/contrast, frame loss, shift, rotation, and time shift. It yields a high average true positive rate of 97.6% and a low average false positive rate of 1.0%. These results emphasize the robustness and discrimination properties of the proposed copy detection system. As security of a fingerprinting system is important for certain applications such as copyright protections, a secure version of the system is also presented.", "title": "" }, { "docid": "3db1505c98ecb39ad11374d1a7a13ca3", "text": "Distributed Denial-of-Service (DDoS) attacks are usually launched through the botnet, an “army” of compromised nodes hidden in the network. Inferential tools for DDoS mitigation should accordingly enable an early and reliable discrimination of the normal users from the compromised ones. Unfortunately, the recent emergence of attacks performed at the application layer has multiplied the number of possibilities that a botnet can exploit to conceal its malicious activities. New challenges arise, which cannot be addressed by simply borrowing the tools that have been successfully applied so far to earlier DDoS paradigms. 
In this paper, we offer basically three contributions: 1) we introduce an abstract model for the aforementioned class of attacks, where the botnet emulates normal traffic by continually learning admissible patterns from the environment; 2) we devise an inference algorithm that is shown to provide a consistent (i.e., converging to the true solution as time elapses) estimate of the botnet possibly hidden in the network; and 3) we verify the validity of the proposed inferential strategy on a test-bed environment. Our tests show that, for several scenarios of implementation, the proposed botnet identification algorithm needs an observation time in the order of (or even less than) 1 min to identify correctly almost all bots, without affecting the normal users’ activity.", "title": "" }, { "docid": "0472166a123f56606cd84a65bab89ce4", "text": "How can we automatically identify the topics of microblog posts? This question has received substantial attention in the research community and has led to the development of different topic models, which are mathematically well-founded statistical models that enable the discovery of topics in document collections. Such models can be used for topic analyses according to the interests of user groups, time, geographical locations, or social behavior patterns. The increasing availability of microblog posts with associated users, textual content, timestamps, geo-locations, and user behaviors, offers an opportunity to study space-time dependent behavioral topics. Such a topic is described by a set of words, the distribution of which varies according to the time, geo-location, and behaviors (that capture how a user interacts with other users by using functionality such as reply or re-tweet) of users. This study jointly models user topic interest and behaviors considering both space and time at a fine granularity. We focus on the modeling of microblog posts like Twitter tweets, where the textual content is short, but where associated information in the form of timestamps, geo-locations, and user interactions is available. The model aims to have applications in location inference, link prediction, online social profiling, etc. We report on experiments with tweets that offer insight into the design properties of the papers proposal.", "title": "" }, { "docid": "d337553027aa2d7464a5631a9b99c421", "text": "This paper presents a real-time vision framework that detects and tracks vehicles from stationary camera. It can be used to calculate statistical information such as average traffic speed and flow as well as in surveillance tasks. The framework consists of three main stages. Vehicles are first detected using Haar-like features. In the second phase, an adaptive appearance-based model is built to dynamically keep track of the detected vehicles. This model is also used in the third phase of data association to fuse the detection and tracking results. The use of detection results to update the tracker enhances the overall framework accuracy. The practical value of the proposed framework is demonstrated in real-life experiments where it is used to robustly compute vehicle counts within certain region of interest under variety of challenges.", "title": "" }, { "docid": "d88523afba42431989f5d3bd22f2ad85", "text": "The visual cues from multiple support regions of different sizes and resolutions are complementary in classifying a candidate box in object detection. 
How to effectively integrate local and contextual visual cues from these regions has become a fundamental problem in object detection. Most existing works simply concatenated features or scores obtained from support regions. In this paper, we propose a novel gated bi-directional CNN (GBD-Net) to pass messages between features from different support regions during both feature learning and feature extraction. Such message passing can be implemented through convolution in two directions and can be conducted in various layers. Therefore, local and contextual visual patterns can validate the existence of each other by learning their nonlinear relationships and their close interactions are modeled in a much more complex way. It is also shown that message passing is not always helpful depending on individual samples. Gated functions are further introduced to control message transmission and their on-and-off is controlled by extra visual evidence from the input sample. GBD-Net is implemented under the Fast RCNN detection framework. Its effectiveness is shown through experiments on three object detection datasets, ImageNet, Pascal VOC2007 and Microsoft COCO.", "title": "" }, { "docid": "b747e174a4381565c83ee595a2d76d20", "text": "With the advent of improved speech recognition and information retrieval systems, more and more users are increasingly relying on digital assistants to solve their information needs. Intelligent digital assistants on mobile devices and computers, such as Windows Cortana and Apple Siri, provide users with more functionalities than was possible in the traditional web search paradigm. While most user interaction studies have focused on the traditional web search setting, in this work we instead consider user interactions with digital assistants (e.g. Cortana, Siri) and aim at identifying the differences in user interactions, session characteristics and use cases. To our knowledge, this is one of the first studies investigating the different use cases of user interactions with a desktop-based digital assistant. Our analysis reveals that given the conversational nature of user interactions, longer sessions (i.e. sessions with a large number of queries) are more common than they were in the traditional web search paradigm. Exploring the different use cases, we observe that users go beyond general search and use a digital assistant to issue commands, seek instant answers and find local information. Our analysis could inform the design of future support systems capable of proactively understanding user needs and developing enhanced evaluation techniques for developing appropriate metrics for the evaluation of digital assistants.", "title": "" }, { "docid": "d3fda01f7dd320a230077804d351b2cc", "text": "Many researchers have proposed programming languages that support incremental computation (IC), which allows programs to be efficiently re-executed after a small change to the input. However, existing implementations of such languages have two important drawbacks. First, recomputation is oblivious to specific demands on the program output; that is, if a program input changes, all dependencies will be recomputed, even if an observer no longer requires certain outputs. Second, programs are made incremental as a unit, with little or no support for reusing results outside of their original context, e.g., when reordered.
To address these problems, we present λ ic , a core calculus that applies a demand-driven semantics to incremental computation, tracking changes in a hierarchical fashion in a novel demanded computation graph. λ ic also formalizes an explicit separation between inner, incremental computations and outer observers. This combination ensures λ ic programs only recompute computations as demanded by observers, and allows inner computations to be reused more liberally. We present ADAPTON, an OCaml library implementing λ ic . We evaluated ADAPTON on a range of benchmarks, and found that it provides reliable speedups, and in many cases dramatically outperforms state-of-the-art IC approaches.", "title": "" }, { "docid": "85d4ac147a4517092b9f81f89af8b875", "text": "This article is an update of an article five of us published in 1992. The areas of Multiple Criteria Decision Making (MCDM) and Multiattribute Utility Theory (MAUT) continue to be active areas of management science research and application. This paper extends the history of these areas and discusses topics we believe to be important for the future of these fields. as well as two anonymous reviewers for valuable comments.", "title": "" }, { "docid": "0f10aa71d58858ea1d8d7571a7cbfe22", "text": "We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vectors Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task. In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy.", "title": "" }, { "docid": "1e7db897ead58568def5066f86922081", "text": "This paper addresses the dynamic difficulty adjustment on MOBA games as a way to improve the players entertainment. Although MOBA is currently one of the most played genres around the world, it is known as a game that offer less autonomy, more challenges and consequently more frustration. Due to these characteristics, the use of a mechanism that performs the difficulty balance dynamically seems to be an interesting alternative to minimize and/or avoid that players experience such frustrations. In this sense, this paper presents a dynamic difficulty adjustment mechanism for MOBA games. The main idea is to create a computer controlled opponent that adapts dynamically to the player performance, trying to offer to the player a better game experience. This is done by evaluating the performance of the player using a metric based on some game features and switching the difficulty of the opponent’s artificial intelligence behavior accordingly. Quantitative and qualitative experiments were performed and the results showed that the system is capable of adapting dynamically to the opponent’s skills. 
In spite of that, the qualitative experiments with users showed that the player’s expertise has a greater influence on the perception of the difficulty level and dynamic adaptation.", "title": "" }, { "docid": "373830558905e8559592c6173366c367", "text": "In this work, we present a depth-based solution to multi-level menus for selection and manipulation of virtual objects using freehand gestures. Navigation between and through menus is performed using three gesture states that utilize X, Y translations of the finger with boundary crossing. Although presented in a single context, this menu structure can be applied to a myriad of domains requiring several levels of menu data, and serves to supplement existing and emerging menu design for augmented, virtual, and mixed-reality applications.", "title": "" }, { "docid": "0305bac1e39203b49b794559bfe0b376", "text": "The emerging field of semantic web technologies promises new stimulus for Software Engineering research. However, since the underlying concepts of the semantic web have a long tradition in the knowledge engineering field, it is sometimes hard for software engineers to overlook the variety of ontology-enabled approaches to Software Engineering. In this paper we therefore present some examples of ontology applications throughout the Software Engineering lifecycle. We discuss the advantages of ontologies in each case and provide a framework for classifying the usage of ontologies in Software Engineering.", "title": "" }, { "docid": "ad2a1afc5602057d76caa34abc92feba", "text": "We have developed a proprietary package that is fully compatible with variously sized chips. In this paper, we present design and development of a Quad Flat No-Lead (QFN) package. We will show how we have built and characterized low-loss packages using standard Printed Circuit Board (PCB) laminate materials. In particular, this package has been developed using Liquid Crystal Polymer (LCP). These packages are unique in that they fully account for and incorporate solder joint and ball bond wire parasitic effects into design. The package has a large cavity section that allow for a variety of chips and decoupling capacitors to be quickly and easily packaged. Insertion loss through a single package transition is measured to be less than 0.4 dB across DC to 40 GHz. Return losses are measured to be better than 15 dB up through 40 GHz. Further, a bare die low noise amplifier (LNA) is packaged using this technology and measured after being surface mounted onto PCB. The packaged LNA is measured to show 19 dB gain over 32 GHz to 44 GHz. Return loss for both bare die and packaged version show no difference, and both measure 15 dB. The LCP package LNA exhibits 4.5 dB noise figure over 37 GHz to 40 GHz. Keywords-Hybrid integrated circuit packaging, liquid crystal polymer, and microwave devices", "title": "" }, { "docid": "bc6c7fcd98160c48cd3b72abff8fad02", "text": "A new concept of formality of linguistic expressions is introduced and argued to be the most important dimension of variation between styles or registers. Formality is subdivided into \"deep\" formality and \"surface\" formality. Deep formality is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. This is achieved by explicit and precise description of the elements of the context needed to disambiguate the expression. 
A formal style is characterized by detachment, accuracy, rigidity and heaviness; an informal style is more flexible, direct, implicit, and involved, but less informative. An empirical measure of formality, the F-score, is proposed, based on the frequencies of different word classes in the corpus. Nouns, adjectives, articles and prepositions are more frequent in formal styles; pronouns, adverbs, verbs and interjections are more frequent in informal styles. It is shown that this measure, though coarse-grained, adequately distinguishes more from less formal genres of language production, for some available corpora in Dutch, French, Italian, and English. A factor similar to the F-score automatically emerges as the most important one from factor analyses applied to extensive data in 7 different languages. Different situational and personality factors are examined which determine the degree of formality in linguistic expression. It is proposed that formality becomes larger when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated. Some empirical evidence and a preliminary theoretical explanation for these propositions is discussed. Short Abstract: The concept of \"deep\" formality is proposed as the most important dimension of variation between language registers or styles. It is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. An empirical measure, the F-score, is proposed, based on the frequencies of different word classes. This measure adequately distinguishes different genres of language production using data for Dutch, French, Italian, and English. Factor analyses applied to data in 7 different languages produce a similar factor as the most important one. Both the data and the theoretical model suggest that formality increases when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated.", "title": "" }, { "docid": "87878562478c3188b3f0e3e1b99e08b8", "text": "This paper introduces a simple method to improve the radiation pattern of the low profile magneto-electric (ME) dipole antenna by adding a substrate integrated waveguide (SIW) side-walls structure around. Compared with the original ME dipole antenna, gain enhancement of about 3dB on average is achieved without deteriorating the impedance bandwidth. The antenna operates at 15GHz with 63.3% -10dB impedance bandwidth from 10.8GHz to 18.4GHz and the gain is 12.3dBi at 17GHz on a substrate with fixed thickness of 3mm (0.15λ0) and aperture of 35mm×35mm (1.75λ0). This antenna is a good choice in the wireless communication application for its advantages of low-profile, wide bandwidth, high gain and low cost fabrication.", "title": "" }, { "docid": "2923e6f0760006b6a049a5afa297ca56", "text": "Six years ago in this journal we discussed the work of Arthur T. Murray, who endeavored to explore artificial intelligence using the Forth programming language [1]. His creation, which he called MIND.FORTH, was interesting in its ability to understand English sentences in the form: subject-verb-object. It also had the capacity to learn new things and to form mental associations between recent experiences and older memories. In the intervening years, Mr. 
Murray has continued to develop his MIND.FORTH: he has translated it into Visual BASIC, PERL and Javascript, he has written a book [2] on the subject, and he maintains a wiki web site where anyone may suggest changes or extensions to his design [3]. MIND.FORTH is necessarily complex and opaque by virtue of its functionality; therefore it may be challenging for a newcomer to grasp. However, the more dedicated student will find much of value in this code. Murray himself has become quite a controversial figure.", "title": "" } ]
scidocsrr
bb84e11ba1397aedffb05bc35b84492f
Performance comparison between OpenStack and OpenNebula and multi-Cloud architectures: Application to cosmology.
[ { "docid": "8985195102d4fd33f3b3b5e70f8dafd6", "text": "Cloud management platforms may manage the resources provided by the infrastructure as a service (IaaS) cloud. With the rapid development of open-source cloud platforms, they have been widely used because they are open and free, and some of them can substitute for commercial clouds. Some existing related works only concisely compare the basic features of open-source platforms and do not include some newly released features. In this paper, we first briefly present the functions of OpenStack and OpenNebula, and then compare them from provenance, architecture, hypervisors, security and other angles in detail. Moreover, we provide some deployment recommendations according to different user demands and platform characteristics.", "title": "" } ]
[ { "docid": "81748f85693f48a2a454d097b9885eb3", "text": "The paper analyses the severity of gridlocks in interbank payment systems operating on a real time basis and evaluates by means of simulations the merits of a gridlock resolution algorithm. Data used in the simulations consist of actual payments settled in the Danish and Finnish RTGS systems. The algorithm is found to be applicable to a real time environment and effective in reducing queuing in the systems at all levels of liquidity, but in particular when intra-day liquidity is scarce.", "title": "" }, { "docid": "8295573eb8533e560fb8d14163191745", "text": "Line drawings play an important role in shape description due to they can convey meaningful information by outstanding the key component and distracting details or ignoring less important. Suggestive contours are a type of lines to produce high quality line drawings. To generate those contours, we can generally start from two aspects: from image space or object space. The image space strategies can not only extract suggestive contours much faster than object space methods, but also don't require the information of the 3D objects. However they are sensitive to small noise, which is ubiquitous in the digital image. In this paper, before extracting lines, we apply an accelerated structure-preserving local Laplacian filter to smooth the shaded image. Through our experiments, we draw the conclusion that our method can effectively suppress the redundant details, generating a cleaner, higher quality line drawing by image space methods, and can compare with the result by object space ones.", "title": "" }, { "docid": "dc88732e98297d90ec97d3fa85503769", "text": "Ferrite and iron powder magnetic materials were developed to support a wide range of components, including inductors, EMI suppressors, conventional transformers and transmission line transformers (TLTs). This article deals with transmission line transformers, presenting the observations and conclusions of the author, reached after extensive experimental research into the behavior and performance of these devices in broadband applications.", "title": "" }, { "docid": "d1bdbe5986bc078a4e5fe22d180e71f7", "text": "Progress in electron microscopy-based high-resolution connectomics is limited by data analysis throughput. Here, we present SegEM, a toolset for efficient semi-automated analysis of large-scale fully stained 3D-EM datasets for the reconstruction of neuronal circuits. By combining skeleton reconstructions of neurons with automated volume segmentations, SegEM allows the reconstruction of neuronal circuits at a work hour consumption rate of about 100-fold less than manual analysis and about 10-fold less than existing segmentation tools. SegEM provides a robust classifier selection procedure for finding the best automated image classifier for different types of nerve tissue. We applied these methods to a volume of 44 × 60 × 141 μm(3) SBEM data from mouse retina and a volume of 93 × 60 × 93 μm(3) from mouse cortex, and performed exemplary synaptic circuit reconstruction. SegEM resolves the tradeoff between synapse detection and semi-automated reconstruction performance in high-resolution connectomics and makes efficient circuit reconstruction in fully-stained EM datasets a ready-to-use technique for neuroscience.", "title": "" }, { "docid": "7057a9c1cedafe1fca48b886afac20d3", "text": "In this paper, we develop an approach to exploiting kernel methods with manifold-valued data. 
In many computer vision problems, the data can be naturally represented as points on a Riemannian manifold. Due to the non-Euclidean geometry of Riemannian manifolds, usual Euclidean computer vision and machine learning algorithms yield inferior results on such data. In this paper, we define Gaussian radial basis function (RBF)-based positive definite kernels on manifolds that permit us to embed a given manifold with a corresponding metric in a high dimensional reproducing kernel Hilbert space. These kernels make it possible to utilize algorithms developed for linear spaces on nonlinear manifold-valued data. Since the Gaussian RBF defined with any given metric is not always positive definite, we present a unified framework for analyzing the positive definiteness of the Gaussian RBF on a generic metric space. We then use the proposed framework to identify positive definite kernels on two specific manifolds commonly encountered in computer vision: the Riemannian manifold of symmetric positive definite matrices and the Grassmann manifold, i.e., the Riemannian manifold of linear subspaces of a Euclidean space. We show that many popular algorithms designed for Euclidean spaces, such as support vector machines, discriminant analysis and principal component analysis can be generalized to Riemannian manifolds with the help of such positive definite Gaussian kernels.", "title": "" }, { "docid": "090f6460180573922dc86866033124c6", "text": "In a dc distribution system, where multiple power sources supply a common bus, current sharing is an important issue. When renewable energy resources are considered, such as photovoltaic (PV), dc/dc converters are needed to decouple the source voltage, which can vary due to operating conditions and maximum power point tracking (MPPT), from the dc bus voltage. Since different sources may have different power delivery capacities that may vary with time, coordination of the interface to the bus is of paramount importance to ensure reliable system operation. Further, since these sources are most likely distributed throughout the system, distributed controls are needed to ensure a robust and fault tolerant control system. This paper presents a model predictive control-based MPPT and model predictive control-based droop current regulator to interface PV in smart dc distribution systems. Back-to-back dc/dc converters control both the input current from the PV module and the droop characteristic of the output current injected into the distribution bus. The predictive controller speeds up both of the control loops, since it predicts and corrects error before the switching signal is applied to the respective converter.", "title": "" }, { "docid": "eb52b00d6aec954e3c64f7043427709c", "text": "The paper presents a ball on plate balancing system useful for various educational purposes. A touch-screen placed on the plate is used for ball's position sensing and two servomotors are employed for balancing the plate in order to control ball's Cartesian coordinates. The design of control embedded systems is demonstrated for different control algorithms in compliance with FreeRTOS real time operating system and dsPIC33 microcontroller. On-line visualizations useful for system monitoring are provided by a PC host application connected with the embedded application. The measurements acquired during real-time execution and the parameters of the system are stored in specific data files, as support for any desired additional analysis. 
Taking into account the properties of this controlled system (instability, fast dynamics) and the capabilities of the embedded architecture (diversity of the involved communication protocols, diversity of employed hardware components, usage of an open source real time operating system), this educational setup allows a good illustration of numerous theoretical and practical aspects related to system engineering and applied informatics.", "title": "" }, { "docid": "3c8ac7bd31d133b4d43c0d3a0f08e842", "text": "How we teach and learn is undergoing a revolution, due to changes in technology and connectivity. Education may be one of the best application areas for advanced NLP techniques, and NLP researchers have much to contribute to this problem, especially in the areas of learning to write, mastery learning, and peer learning. In this paper I consider what happens when we convert natural language processors into natural language coaches. 1 Why Should You Care, NLP Researcher? There is a revolution in learning underway. Students are taking Massive Open Online Courses as well as online tutorials and paid online courses. Technology and connectivity makes it possible for students to learn from anywhere in the world, at any time, to fit their schedules. And in today’s knowledge-based economy, going to school only in one’s early years is no longer enough; in future most people are going to need continuous, lifelong education. Students are changing too — they expect to interact with information and technology. Fortunately, pedagogical research shows significant benefits of active learning over passive methods. The modern view of teaching means students work actively in class, talk with peers, and are coached more than graded by their instructors. In this new world of education, there is a great need for NLP research to step in and help. I hope in this paper to excite colleagues about the possibilities and suggest a few new ways of looking at them. I do not attempt to cover the field of language and learning comprehensively, nor do I claim there is no work in the field. In fact there is quite a bit, such as a recent special issue on language learning resources (Sharoff et al., 2014), the long running ACL workshops on Building Educational Applications using NLP (Tetreault et al., 2015), and a recent shared task competition on grammatical error detection for second language learners (Ng et al., 2014). But I hope I am casting a few interesting thoughts in this direction for those colleagues who are not focused on this particular topic.", "title": "" }, { "docid": "2cf7921cce2b3077c59d9e4e2ab13afe", "text": "Scientists and consumers preference focused on natural colorants due to the emergence of negative health effects of synthetic colorants which is used for many years in foods. Interest in natural colorants is increasing with each passing day as a consequence of their antimicrobial and antioxidant effects. The biggest obstacle in promotion of natural colorants as food pigment agents is that it requires high investment. For this reason, the R&D studies related issues are shifted to processes to reduce cost and it is directed to pigment production from microorganisms with fermentation. Nowadays, there is pigments obtained by commercially microorganisms or plants with fermantation. These pigments can be use for both food colorant and food supplement. 
In this review, besides colourant and antioxidant properties, antimicrobial properties of natural colorants are discussed.", "title": "" }, { "docid": "03b044199b9985249f98e4f467561d82", "text": "We argue that the estimation of the mutual information between high dimensional continuous random variables is achievable by gradient descent over neural networks. This paper presents a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size. MINE is backpropable and we prove that it is strongly consistent. We illustrate a handful of applications in which MINE is succesfully applied to enhance the property of generative models in both unsupervised and supervised settings. We apply our framework to estimate the information bottleneck, and apply it in tasks related to supervised classification problems. Our results demonstrate substantial added flexibility and improvement in these settings.", "title": "" }, { "docid": "21fd0aae6d0a2902108d53d92749a754", "text": "The Clustered Regularly Interspaced Short Palindromic Repeats associated Cas9/sgRNA system is a novel targeted genome-editing technique derived from bacterial immune system. It is an inexpensive, easy, most user friendly and rapidly adopted genome editing tool transforming to revolutionary paradigm. This technique enables precise genomic modifications in many different organisms and tissues. Cas9 protein is an RNA guided endonuclease utilized for creating targeted double-stranded breaks with only a short RNA sequence to confer recognition of the target in animals and plants. Development of genetically edited (GE) crops similar to those developed by conventional or mutation breeding using this potential technique makes it a promising and extremely versatile tool for providing sustainable productive agriculture for better feeding of rapidly growing population in a changing climate. The emerging areas of research for the genome editing in plants include interrogating gene function, rewiring the regulatory signaling networks and sgRNA library for high-throughput loss-of-function screening. In this review, we have described the broad applicability of the Cas9 nuclease mediated targeted plant genome editing for development of designer crops. The regulatory uncertainty and social acceptance of plant breeding by Cas9 genome editing have also been described. With this powerful and innovative technique the designer GE non-GM plants could further advance climate resilient and sustainable agriculture in the future and maximizing yield by combating abiotic and biotic stresses.", "title": "" }, { "docid": "f3f9b4659912e5234364c198d32d4767", "text": "Estimating the motion of a vehicle is a crucial requirement for intelligent vehicles. In order to solve this problem using a Bayes filter, an appropriate model of vehicular motions is required. This paper systematically reviews typical vehicular motion models and evaluates their suitability in different scenarios. For that, the results of extensive experiments using accurate reference sensors are presented and discussed in order to provide guidelines for the choice of an optimal model.", "title": "" }, { "docid": "c1305b1ccc199126a52c6a2b038e24d1", "text": "This study has devoted much effort to developing an integrated model designed to predict and explain an individual’s continued use of online services based on the concepts of the expectation disconfirmation model and the theory of planned behavior. 
Empirical data was collected from a field survey of Cyber University System (CUS) users to verify the fit of the hypothetical model. The measurement model indicates the theoretical constructs have adequate reliability and validity while the structured equation model is illustrated as having a high model fit for empirical data. Study’s findings show that a customer’s behavioral intention towards e-service continuance is mainly determined by customer satisfaction and additionally affected by perceived usefulness and subjective norm. Generally speaking, the integrated model can fully reflect the spirit of the expectation disconfirmation model and take advantage of planned behavior theory. After consideration of the impact of systemic features, personal characteristics, and social influence on customer behavior, the integrated model had a better explanatory advantage than other EDM-based models proposed in prior research. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ec641ace6df07156891f2bf40ea5d072", "text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.", "title": "" }, { "docid": "2fa3e2a710cc124da80941545fbdffa4", "text": "INTRODUCTION\nThe use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear.\n\n\nMETHODS\nWe reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear.\n\n\nRESULTS\nThe intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001).\n\n\nDISCUSSION\nOur findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. 
The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.", "title": "" }, { "docid": "e01d5be587c73aaa133acb3d8aaed996", "text": "This paper presents a new optimization-based method to control three micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. Our control strategy arises from physics that apply force in the negative direction of states errors. The objective is to regulate the inter-agent spacing, heading and position of the set of agents, for motion in two dimensions, while the system is inherently underactuated. Simulation results on three agents and a proof-of-concept experiment on two agents show the feasibility of the idea to shed light on future micro/nanoscale multi-agent explorations. Average tracking error of less than 50 micrometers and 1.85 degrees is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical spherical-shape agents with nominal radius less than of 250 micrometers operating within several body-lengths of each other.", "title": "" }, { "docid": "3817e2af004e089915bcdb030622606f", "text": "The paper describes a practical model for routing and tracking with mobile vehicle in a large area outdoor environment based on the Global Positioning System (GPS) and Global System for Mobile Communication (GSM). The supporting devices, GPS module-eMD3620 of AT&S company and GSM modem-GM862 of Telit company, are controlled by a 32bits microcontroller LM3S2965 implemented a new version ARM Cortex M3 core. The system is equipped the Compass sensor-YAS529 of Yamaha company and Accelerator sensor- KXSC72050 of Koinix company to determine moving direction of a vehicle. The device will collect positions of the vehicle via GPS receiver and then sends the data of positions to supervised center by the SMS (Short Message Services) or GPRS (General Package Radio Service) service. The supervised center is composed of a development kit that supports GSM techniques-WMP100 of the Wavecom company. After processing data, the position of the mobile vehicle will be displayed on Google Map.", "title": "" }, { "docid": "5af5936ec0d889ab19bd8c6c8e8ebc35", "text": "Development in the wireless communication systems is the evolving field of research in today’s world. The demand of high data rate, low latency at the minimum cost by the user requires many changes in the hardware organization. The use of digital modulation techniques like OFDM assures the reliability of communication in addition to providing flexibility and robustness. Modifications in the hardware structure can be replaced by the change in software only which gives birth to Software Define Radio (SDR): a radio which is more flexible as compared to conventional radio and can perform signal processing at the minimum cost. 
GNU Radio with the help of Universal Software Peripheral Radio (USRP) provides flexible and the cost effective SDR platform for the purpose of real time video transmission. The results given in this paper are taken from the experiment performed on USRP-1 along with the GNU Radio version 3.2.2.", "title": "" }, { "docid": "4775bf71a5eea05b77cafa53daefcff9", "text": "There is mounting empirical evidence that interacting with nature delivers measurable benefits to people. Reviews of this topic have generally focused on a specific type of benefit, been limited to a single discipline, or covered the benefits delivered from a particular type of interaction. Here we construct novel typologies of the settings, interactions and potential benefits of people-nature experiences, and use these to organise an assessment of the benefits of interacting with nature. We discover that evidence for the benefits of interacting with nature is geographically biased towards high latitudes and Western societies, potentially contributing to a focus on certain types of settings and benefits. Social scientists have been the most active researchers in this field. Contributions from ecologists are few in number, perhaps hindering the identification of key ecological features of the natural environment that deliver human benefits. Although many types of benefits have been studied, benefits to physical health, cognitive performance and psychological well-being have received much more attention than the social or spiritual benefits of interacting with nature, despite the potential for important consequences arising from the latter. The evidence for most benefits is correlational, and although there are several experimental studies, little as yet is known about the mechanisms that are important for delivering these benefits. For example, we do not know which characteristics of natural settings (e.g., biodiversity, level of disturbance, proximity, accessibility) are most important for triggering a beneficial interaction, and how these characteristics vary in importance among cultures, geographic regions and socio-economic groups. These are key directions for future research if we are to design landscapes that promote high quality interactions between people and nature in a rapidly urbanising world.", "title": "" } ]
scidocsrr
08bf22ef476d49475f9bfe4d097bda2c
A Natural Language Database Interface Based on a Probabilistic Context Free Grammar
[ { "docid": "bfa178f35027a55e8fd35d1c87789808", "text": "We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional reg ularities that are salient in the data.", "title": "" } ]
[ { "docid": "e11a1e3ef5093aa77797463b7b8994ea", "text": "Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human–robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.", "title": "" }, { "docid": "b168f298448b3ba16b7f585caae7baa6", "text": "Not only how good or bad people feel on average, but also how their feelings fluctuate across time is crucial for psychological health. The last 2 decades have witnessed a surge in research linking various patterns of short-term emotional change to adaptive or maladaptive psychological functioning, often with conflicting results. A meta-analysis was performed to identify consistent relationships between patterns of short-term emotion dynamics-including patterns reflecting emotional variability (measured in terms of within-person standard deviation of emotions across time), emotional instability (measured in terms of the magnitude of consecutive emotional changes), and emotional inertia of emotions over time (measured in terms of autocorrelation)-and relatively stable indicators of psychological well-being or psychopathology. We determined how such relationships are moderated by the type of emotional change, type of psychological well-being or psychopathology involved, valence of the emotion, and methodological factors. A total of 793 effect sizes were identified from 79 articles (N = 11,381) and were subjected to a 3-level meta-analysis. The results confirmed that overall, low psychological well-being co-occurs with more variable (overall ρ̂ = -.178), unstable (overall ρ̂ = -.205), but also more inert (overall ρ̂ = -.151) emotions. These effect sizes were stronger when involving negative compared with positive emotions. Moreover, the results provided evidence for consistency across different types of psychological well-being and psychopathology in their relation with these dynamical patterns, although specificity was also observed. 
The findings demonstrate that psychological flourishing is characterized by specific patterns of emotional fluctuations across time, and provide insight into what constitutes optimal and suboptimal emotional functioning. (PsycINFO Database Record", "title": "" }, { "docid": "9c715e50cf36e14312407ed722fe7a7d", "text": "Usual medical care often fails to meet the needs of chronically ill patients, even in managed, integrated delivery systems. The medical literature suggests strategies to improve outcomes in these patients. Effective interventions tend to fall into one of five areas: the use of evidence-based, planned care; reorganization of practice systems and provider roles; improved patient self-management support; increased access to expertise; and greater availability of clinical information. The challenge is to organize these components into an integrated system of chronic illness care. Whether this can be done most efficiently and effectively in primary care practice rather than requiring specialized systems of care remains unanswered.", "title": "" }, { "docid": "dbd11235f7b6b515f672b06bb10ebc3d", "text": "Until recently job seeking has been a tricky, tedious and time consuming process, because people looking for a new position had to collect information from many different sources. Job recommendation systems have been proposed in order to automate and simplify this task, also increasing its effectiveness. However, current approaches rely on scarce manually collected data that often do not completely reveal people skills. Our work aims to find out relationships between jobs and people skills making use of data from LinkedIn users’ public profiles. Semantic associations arise by applying Latent Semantic Analysis (LSA). We use the mined semantics to obtain a hierarchical clustering of job positions and to build a job recommendation system. The outcome proves the effectiveness of our method in recommending job positions. Anyway, we argue that our approach is definitely general, because the extracted semantics could be worthy not only for job recommendation systems but also for recruiting systems. Furthermore, we point out that both the hierarchical clustering and the recommendation system do not require parameters to be tuned.", "title": "" }, { "docid": "4a9dbf259f14e5874cda6782cb8f981a", "text": "Concept of Safe diagram was introduced 30 years ago (Singh and Schiffer, 1982) for the analysis of the vibration characteristics of packeted bladed disc for steam turbines. A detailed description of Safe diagram for steam turbine blades was presented 25 years ago in the 17 th Turbo Symposium (Singh et. el, 1988). Since that time it has found application in the design and failure analysis of many turbo machineries e.g. steam turbines, centrifugal compressor, axial compressor, expanders etc. The theory was justified using the argument of natural modes of vibration containing single harmonics and alternating forcing represented by pure sine wave around 360 degrees applied to bladed disk. This case is referred as tuned system. It was also explained that packeted bladed disc is a mistuned system where geometrical symmetry is broken deliberately by breaking the shroud in many places. This is a normal practice which provides blade packets design. This is known as deliberate geometrical mistuning. This mistuning gave rise to frequency of certain modes being split in two different modes which otherwise existed in duplicate. Natural modes of this type construction exhibited impurity i.e. 
it contained many harmonics in place of just one as it occurs in a tuned case. As a result, this phenomenon gives rise to different system response for each split mode. Throughout the years that have passed, Safe diagram has been used for any mistuned systemrandom, known or deliberate. Many co-workers and friends have asked me to write the history of the evolution and of the first application of this concept and its application in more general case. This paper describes application of Safe diagram for general case of tuned system and mistuned system.", "title": "" }, { "docid": "b507a9f5211ed6fa9b9cc954392dbd84", "text": "We introduce EnhanceGAN, an adversarial learning based model that performs automatic image enhancement. Traditional image enhancement frameworks typically involve training models in a fully-supervised manner, which require expensive annotations in the form of aligned image pairs. In contrast to these approaches, our proposed EnhanceGAN only requires weak supervision (binary labels on image aesthetic quality) and is able to learn enhancement operators for the task of aesthetic-based image enhancement. In particular, we show the effectiveness of a piecewise color enhancement module trained with weak supervision, and extend the proposed EnhanceGAN framework to learning a deep filtering-based aesthetic enhancer. The full differentiability of our image enhancement operators enables the training of EnhanceGAN in an end-to-end manner. We further demonstrate the capability of EnhanceGAN in learning aesthetic-based image cropping without any groundtruth cropping pairs. Our weakly-supervised EnhanceGAN reports competitive quantitative results on aesthetic-based color enhancement as well as automatic image cropping, and a user study confirms that our image enhancement results are on par with or even preferred over professional enhancement.", "title": "" }, { "docid": "f8266975b254c4e2c27c8e477062b796", "text": "Unmanned aerial vehicles (UAVs) have enormous potential in the public and civil domains. These are particularly useful in applications, where human lives would otherwise be endangered. Multi-UAV systems can collaboratively complete missions more efficiently and economically as compared to single UAV systems. However, there are many issues to be resolved before effective use of UAVs can be made to provide stable and reliable context-specific networks. Much of the work carried out in the areas of mobile ad hoc networks (MANETs), and vehicular ad hoc networks (VANETs) does not address the unique characteristics of the UAV networks. UAV networks may vary from slow dynamic to dynamic and have intermittent links and fluid topology. While it is believed that ad hoc mesh network would be most suitable for UAV networks yet the architecture of multi-UAV networks has been an understudied area. Software defined networking (SDN) could facilitate flexible deployment and management of new services and help reduce cost, increase security and availability in networks. Routing demands of UAV networks go beyond the needs of MANETS and VANETS. Protocols are required that would adapt to high mobility, dynamic topology, intermittent links, power constraints, and changing link quality. UAVs may fail and the network may get partitioned making delay and disruption tolerance an important design consideration. 
Limited life of the node and dynamicity of the network lead to the requirement of seamless handovers, where researchers are looking at the work done in the areas of MANETs and VANETs, but the jury is still out. As energy supply on UAVs is limited, protocols in various layers should contribute toward greening of the network. This paper surveys the work done toward all of these outstanding issues, relating to this new class of networks, so as to spur further research in these areas.", "title": "" }, { "docid": "a1444497114eadc1c90c1cfb85852641", "text": "For several years it has been argued that neural synchronisation is crucial for cognition. The idea that synchronised temporal patterns between different neural groups carries information above and beyond the isolated activity of these groups has inspired a shift in focus in the field of functional neuroimaging. Specifically, investigation into the activation elicited within certain regions by some stimulus or task has, in part, given way to analysis of patterns of co-activation or functional connectivity between distal regions. Recently, the functional connectivity community has been looking beyond the assumptions of stationarity that earlier work was based on, and has introduced methods to incorporate temporal dynamics into the analysis of connectivity. In particular, non-invasive electrophysiological data (magnetoencephalography/electroencephalography (MEG/EEG)), which provides direct measurement of whole-brain activity and rich temporal information, offers an exceptional window into such (potentially fast) brain dynamics. In this review, we discuss challenges, solutions, and a collection of analysis tools that have been developed in recent years to facilitate the investigation of dynamic functional connectivity using these imaging modalities. Further, we discuss the applications of these approaches in the study of cognition and neuropsychiatric disorders. Finally, we review some existing developments that, by using realistic computational models, pursue a deeper understanding of the underlying causes of non-stationary connectivity.", "title": "" }, { "docid": "26e79793addc4750dcacc0408764d1e1", "text": "It has been shown that integration of acoustic and visual information especially in noisy conditions yields improved speech recognition results. This raises the question of how to weight the two modalities in different noise conditions. Throughout this paper we develop a weighting process adaptive to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments. The neural networks were in all cases trained on clean data. Firstly, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria to estimate the reliability of the audio stream. Based on this, a mapping between the measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.", "title": "" }, { "docid": "14cb0e8fc4e8f82dc4e45d8562ca4bb2", "text": "Information security is one of the most important factors to be considered when secret information has to be communicated between two parties. 
Cryptography and steganography are the two techniques used for this purpose. Cryptography scrambles the information, but it reveals the existence of the information. Steganography hides the actual existence of the information so that anyone else other than the sender and the recipient cannot recognize the transmission. In steganography the secret information to be communicated is hidden in some other carrier in such a way that the secret information is invisible. In this paper an image steganography technique is proposed to hide audio signal in image in the transform domain using wavelet transform. The audio signal in any format (MP3 or WAV or any other type) is encrypted and carried by the image without revealing the existence to anybody. When the secret information is hidden in the carrier the result is the stego signal. In this work, the results show good quality stego signal and the stego signal is analyzed for different attacks. It is found that the technique is robust and it can withstand the attacks. The quality of the stego image is measured by Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Metric (SSIM), Universal Image Quality Index (UIQI). The quality of extracted secret audio signal is measured by Signal to Noise Ratio (SNR), Squared Pearson Correlation Coefficient (SPCC). The results show good values for these metrics. © 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of organizing committee of the Graph Algorithms, High Performance Implementations and Applications (ICGHIA2014).", "title": "" }, { "docid": "1b4d292a618befaa44cd8214abe46038", "text": "The obsessive-compulsive spectrum is an important concept referring to a number of disorders drawn from several diagnostic categories that share core obsessive-compulsive features. These disorders can be grouped by the focus of their symptoms: bodily preoccupation, impulse control, or neurological disorders. Although the disorders are clearly distinct from one another, they have intriguing similarities in phenomenology, etiology, pathophysiology, patient characteristics, and treatment response. In combination with the knowledge gained through many years of research on obsessive-compulsive disorder (OCD), the concept of a spectrum has generated much fruitful research on the spectrum disorders. It has become apparent that these disorders can also be viewed as being on a continuum of compulsivity to impulsivity, characterized by harm avoidance at the compulsive end and risk seeking at the impulsive end. The compulsive and impulsive disorders differ in systematic ways that are just beginning to be understood. Here, we review these concepts and several representative obsessive-compulsive spectrum disorders including both compulsive and impulsive disorders, as well as the three different symptom clusters: OCD, body dysmorphic disorder, pathological gambling, sexual compulsivity, and autism spectrum disorders.", "title": "" }, { "docid": "a33e8a616955971014ceea9da1e8fcbe", "text": "Highlights Auditory middle and late latency responses can be recorded reliably from ear-EEG.For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp.Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. 
The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references. The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses.", "title": "" }, { "docid": "98f551a5af7efe8537b63d482e26c907", "text": "This research is aimed at identifying the determinants that influence higher educational students’ behavioral intention to utilize elearning systems. The study, therefore, proposed an extension of Unified Theory of Acceptance and use of Technology (UTAUT) model by integrating it with four other variables. Data collected from 264 higher educational students using e-learning systems in Ghana through survey questionnaire were used to test the proposed research model. The study indicated that six variables, Performance expectancy (PE), Effort Expectancy (EE), Social Influence (SI), Facilitating Factor (FF), personal innovativeness (PI) and Study Modes (SM) had significant impact on students’ behavioral intention on e-learning system. The empirical outcome reflects both theoretical and practical consideration in promoting e-learning systems in higher education in Ghana.", "title": "" }, { "docid": "0b19bd9604fae55455799c39595c8016", "text": "Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in the recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) top-k nodes problem and 2) λ -coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The λ-coverage problem is concerned with finding a set of key nodes having minimal size that can influence a given percentage λ of the nodes in the entire network. 
We propose a new way of solving these problems using the concept of Shapley value which is a well known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPINs) algorithms for solving the top-k nodes problem and the λ -coverage problem. We compare the performance of the proposed SPIN algorithms with well known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.", "title": "" }, { "docid": "6a71031c810791b93bc06116b75c2c15", "text": "With the popularity of social media platforms such as Facebook and Twitter, the amount of useful data in these sources is rapidly increasing, making them promising places for information acquisition. This research aims at the customized organization of a social media corpus using focused topic hierarchy. It organizes the contents into different structures to meet with users' different information needs (e.g., \"iPhone 5 problem\" or \"iPhone 5 camera\"). To this end, we introduce a novel function to measure the likelihood of a topic hierarchy, by which the users' information need can be incorporated into the process of topic hierarchy construction. Using the structure information within the generated topic hierarchy, we then develop a probability based model to identify the representative contents for topics to assist users in document retrieval on the hierarchy. Experimental results on real world data illustrate the effectiveness of our method and its superiority over state-of-the-art methods for both information organization and retrieval tasks.", "title": "" }, { "docid": "231d06a13cfdf244f6e2c55861d272fb", "text": "Despite the recent advances in search quality, the fast increase in the size of the Web collection has introduced new challenges for Web ranking algorithms. In fact, there are still many situations in which the users are presented with imprecise or very poor results. One of the key difficulties is the fact that users usually submit very short and ambiguous queries, and they do not fully specify their information needs. That is, it is necessary to improve the query formation process if better answers are to be provided. In this work we propose a novel concept-based query expansion technique, which allows disambiguating queries submitted to search engines. The concepts are extracted by analyzing and locating cycles in a special type of query relations graph. This is a directed graph built from query relations mined using association rules. The concepts related to the current query are then shown to the user who selects the one concept that he interprets is most related to his query. This concept is used to expand the original query and the expanded query is processed instead. Using a Web test collection, we show that our approach leads to gains in average precision figures of roughly 32%. Further, if the user also provides information on the type of relation between his query and the selected concept, the gains in average precision go up to roughly 52%.", "title": "" }, { "docid": "3fd14fcfe8240456bc38d5492c3510a4", "text": "This paper presents a study on adjacent channel interference in millimeter-wave small cell systems based on IEEE 802.11ad/WiGig. 
It includes hardware prototype development, interference measurements, and performance evaluation of an interference suppression technique. The access point prototype employs three RF modules with 120° beam steering capability, thus enabling 360° coverage. Using the developed prototype, interference measurements were performed and the packet error degradation due to adjacent channel interference was observed. To mitigate the performance degradation, an interference suppression technique using a two stream receiver architecture was applied. The subsequent measurements showed improvement in EVM and also expansion of the cell's coverage area, demonstrating the effectiveness of the applied technique for small cell systems using IEEE 802.11ad/WiGig.", "title": "" }, { "docid": "01ba4d36dd05cb533e5ff1ea462888d6", "text": "Against a backdrop of serious corporate and mutual fund scandals, governmental bodies, institutional and private investors have demanded more effective corporate governance structures procedures and systems. The compliance function is now an integral part of corporate policy and practice. This paper presents the findings from a longitudinal qualitative research study on the introduction of an IT-based investment management system at four client sites. Using institutional theory to analyze our data, we find the process of institutionalization follows a non-linear pathway where regulative, normative and cultural forces within the investment management industry produce conflicting organizational behaviours and outcomes.", "title": "" }, { "docid": "fbd390ed58529fc5dc552d7550168546", "text": "Recently, tuple-stores have become pivotal structures in many information systems. Their ability to handle large datasets makes them important in an era with unprecedented amounts of data being produced and exchanged. However, these tuple-stores typically rely on structured peer-to-peer protocols which assume moderately stable environments. Such assumption does not always hold for very large scale systems sized in the scale of thousands of machines. In this paper we present a novel approach to the design of a tuple-store. Our approach follows a stratified design based on an unstructured substrate. We focus on this substrate and how the use of epidemic protocols allow reaching high dependability and scalability.", "title": "" }, { "docid": "e54f649fced7c82b643b9ada2dca6187", "text": "Some 3D computer vision techniques such as structure from motion (SFM) and augmented reality (AR) depend on a specific perspective-n-point (PnP) algorithm to estimate the absolute camera pose. However, existing PnP algorithms are difficult to achieve a good balance between accuracy and efficiency, and most of them do not make full use of the internal camera information such as focal length. In order to attack these drawbacks, we propose a fast and robust PnP (FRPnP) method to calculate the absolute camera pose for 3D compute vision. In the proposed FRPnP method, we firstly formulate the PnP problem as the optimization problem in the null space that can avoid the effects of the depth of each 3D point. Secondly, we can easily get the solution by the direct manner using singular value decomposition. Finally, the accurate information of camera pose can be obtained by optimization strategy. We explore four ways to evaluate the proposed FRPnP algorithm with synthetic dataset, real images, and apply it in the AR and SFM system. 
Experimental results show that the proposed FRPnP method achieves the best balance between computational cost and precision, and clearly outperforms the state-of-the-art PnP methods.", "title": "" } ]
scidocsrr
110aaf871efbe77f508b98841e43ac00
A deep learning framework for character motion synthesis and editing
[ { "docid": "2757a1e3e1c9169716a9876494debf13", "text": "We present a technique for learning a manifold of human motion data using Convolutional Autoencoders. Our approach is capable of learning a manifold on the complete CMU database of human motion. This manifold can be treated as a prior probability distribution over human motion data, which has many applications in animation research, including projecting invalid or corrupt motion onto the manifold for removing error, computing similarity between motions using geodesic distance along the manifold, and interpolation of motion along the manifold for avoiding blending artefacts.", "title": "" } ]
[ { "docid": "4ee51768115c2079d7a4348af18be3ae", "text": "This paper presents a 15kV silicon carbide (SiC) MOSFET gate drive, which features high common-mode (CM) noise immunity, small size, light weight, and robust yet flexible protection functions. To enhance the gate-drive power reliability, a power over fiberbased isolated power supply is designed to replace the traditional design based on isolation transformer. It delivers the gate-drive power by laser light via optical fiber over a long distance (>1 m), so a high isolation voltage (>20 kV) is achieved, and the circuit size and weight are reduced. More importantly, it eliminates the parasitic CM capacitance coupling the power stage and control stage, and thus eradicates the control signal distortion caused by high dv/dt in switching transients of the high-voltage SiC devices. In addition, the gate-drive circuit design integrates comprehensive protection functions, including the overcurrent protection, undervoltage/overvoltage lockout, active miller clamping, soft turn off, and fault report. The overcurrent protection responds within 400 ns. The experimental results from a 15kV double-pulse tester are presented to validate the design.", "title": "" }, { "docid": "b0e81e112b9aa7ebf653243f00b21f23", "text": "Recent research indicates that toddlers and infants succeed at various non-verbal spontaneous-response false-belief tasks; here we asked whether toddlers would also succeed at verbal spontaneous-response false-belief tasks that imposed significant linguistic demands. We tested 2.5-year-olds using two novel tasks: a preferential-looking task in which children listened to a false-belief story while looking at a picture book (with matching and non-matching pictures), and a violation-of-expectation task in which children watched an adult 'Subject' answer (correctly or incorrectly) a standard false-belief question. Positive results were obtained with both tasks, despite their linguistic demands. These results (1) support the distinction between spontaneous- and elicited-response tasks by showing that toddlers succeed at verbal false-belief tasks that do not require them to answer direct questions about agents' false beliefs, (2) reinforce claims of robust continuity in early false-belief understanding as assessed by spontaneous-response tasks, and (3) provide researchers with new experimental tasks for exploring early false-belief understanding in neurotypical and autistic populations.", "title": "" }, { "docid": "613f9f4be194c012593cb7fe2bf37471", "text": "Thalamus.The human thalamus is a nuclear complex located in the diencephalon and comprising of four parts (the hypothalamus, the epythalamus, the ventral thalamus, and the dorsal thalamus). The thalamus is a relay centre subserving both sensory and motor mechanisms. Thalamic nuclei (50–60 nuclei) project to one or a few well-defined cortical areas. Multiple cortical areas receive afferents from a single thalamic nucleus and send back information to different thalamic nuclei. The corticofugal projection provides positive feedback to the \"correct\" input, while at the same time suppressing irrelevant information. Topographical organisation of the thalamic afferents and efferents is contralateral, and the lateralisation of the thalamic functions affects both sensory and motoric aspects. Symptoms of lesions located in the thalamus are closely related to the function of the areas involved. 
An infarction or haemorrhage thalamic lesion can develop somatosensory disturbances and/or central pain in the opposite hemibody, analgesic or purely algesic thalamic syndrome characterised by contralateral anaesthesia (or hypaesthesia), contralateral weakness, ataxia and, often, persistent spontaneous pain. Basal ganglia.Basal ganglia form a major centre in the complex extrapyramidal motor system, as opposed to the pyramidal motor system (corticobulbar and corticospinal pathways). Basal ganglia are involved in many neuronal pathways having emotional, motivational, associative and cognitive functions as well. The striatum (caudate nucleus, putamen and nucleus accumbens) receive inputs from all cortical areas and, throughout the thalamus, project principally to frontal lobe areas (prefrontal, premotor and supplementary motor areas) which are concerned with motor planning. These circuits: (i) have an important regulatory influence on cortex, providing information for both automatic and voluntary motor responses to the pyramidal system; (ii) play a role in predicting future events, reinforcing wanted behaviour and suppressing unwanted behaviour, and (iii) are involved in shifting attentional sets and in both high-order processes of movement initiation and spatial working memory. Basal ganglia-thalamo-cortical circuits maintain somatotopic organisation of movement-related neurons throughout the circuit. These circuits reveal functional subdivisions of the oculomotor, prefrontal and cingulate circuits, which play an important role in attention, learning and potentiating behaviour-guiding rules. Involvement of the basal ganglia is related to involuntary and stereotyped movements or paucity of movements without involvement of voluntary motor functions, as in Parkinson’s disease, Wilson’s disease, progressive supranuclear palsy or Huntington’s disease. The symptoms differ with the location of the lesion. The commonest disturbances in basal ganglia lesions are abulia (apathy with loss of initiative and of spontaneous thought and emotional responses) and dystonia, which become manifest as behavioural and motor disturbances, respectively.", "title": "" }, { "docid": "018d05daa52fb79c17519f29f31026d7", "text": "The aim of this paper is to review conceptual and empirical literature on the concept of distributed leadership (DL) in order to identify its origins, key arguments and areas for further work. Consideration is given to the similarities and differences between DL and related concepts, including ‘shared’, ‘collective’, ‘collaborative’, ‘emergent’, ‘co-’ and ‘democratic’ leadership. Findings indicate that, while there are some common theoretical bases, the relative usage of these concepts varies over time, between countries and between sectors. In particular, DL is a notion that has seen a rapid growth in interest since the year 2000, but research remains largely restricted to the field of school education and of proportionally more interest to UK than US-based academics. Several scholars are increasingly going to great lengths to indicate that, in order to be ‘distributed’, leadership need not necessarily be widely ‘shared’ or ‘democratic’ and, in order to be effective, there is a need to balance different ‘hybrid configurations’ of practice. 
The paper highlights a number of areas for further attention, including three factors relating to the context of much work on DL (power and influence; organizational boundaries and context; and ethics and diversity), and three methodological and developmental challenges (ontology; research methods; and leadership development, reward and recognition). It is concluded that descriptive and normative perspectives which dominate the literature should be supplemented by more critical accounts which recognize the rhetorical and discursive significance of DL in (re)constructing leader– follower identities, mobilizing collective engagement and challenging or reinforcing traditional forms of organization.", "title": "" }, { "docid": "6888b5311d7246c5eb18142d2746ec68", "text": "Forms of well-being vary in their activation as well as valence, differing in respect of energy-related arousal in addition to whether they are negative or positive. Those differences suggest the need to refine traditional assumptions that poor person-job fit causes lower well-being. More activated forms of well-being were proposed to be associated with poorer, rather than better, want-actual fit, since greater motivation raises wanted levels of job features and may thus reduce fit with actual levels. As predicted, activated well-being (illustrated by job engagement) and more quiescent well-being (here, job satisfaction) were found to be associated with poor fit in opposite directions--positively and negatively, respectively. Theories and organizational practices need to accommodate the partly contrasting implications of different forms of well-being.", "title": "" }, { "docid": "3150741173abdb725a4d35ded866b2e3", "text": "BACKGROUND AND PURPOSE\nAcute-onset dysphagia after stroke is frequently associated with an increased risk of aspiration pneumonia. Because most screening tools are complex and biased toward fluid swallowing, we developed a simple, stepwise bedside screen that allows a graded rating with separate evaluations for nonfluid and fluid nutrition starting with nonfluid textures. The Gugging Swallowing Screen (GUSS) aims at reducing the risk of aspiration during the test to a minimum; it assesses the severity of aspiration risk and recommends a special diet accordingly.\n\n\nMETHODS\nFifty acute-stroke patients were assessed prospectively. The validity of the GUSS was established by fiberoptic endoscopic evaluation of swallowing. For interrater reliability, 2 independent therapists evaluated 20 patients within a 2-hour period. For external validity, another group of 30 patients was tested by stroke nurses. For content validity, the liquid score of the fiberoptic endoscopic evaluation of swallowing was compared with the semisolid score.\n\n\nRESULTS\nInterrater reliability yielded excellent agreement between both raters (kappa=0.835, P<0.001). In both groups, GUSS predicted aspiration risk well (area under the curve=0.77; 95% CI, 0.53 to 1.02 in the 20-patient sample; area under the curve=0.933; 95% CI, 0.833 to 1.033 in the 30-patient sample). The cutoff value of 14 points resulted in 100% sensitivity, 50% specificity, and a negative predictive value of 100% in the 20-patient sample and of 100%, 69%, and 100%, respectively, in the 30-patient sample. 
Content validity showed a significantly higher aspiration risk with liquids compared with semisolid textures (P=0.001), therefore confirming the subtest sequence of GUSS.\n\n\nCONCLUSIONS\nThe GUSS offers a quick and reliable method to identify stroke patients with dysphagia and aspiration risk. Such a graded assessment considers the pathophysiology of voluntary swallowing in a more differentiated fashion and provides less discomfort for those patients who can continue with their oral feeding routine for semisolid food while refraining from drinking fluids.", "title": "" }, { "docid": "ad8aacb65cef9abe3e232d4bec484dca", "text": "The advent of emerging technologies such as Web services, service-oriented architecture, and cloud computing has enabled us to perform business services more efficiently and effectively. However, we still suffer from unintended security leakages by unauthorized actions in business services while providing more convenient services to Internet users through such a cutting-edge technological growth. Furthermore, designing and managing Web access control policies are often error-prone due to the lack of effective analysis mechanisms and tools. In this paper, we represent an innovative policy anomaly analysis approach for Web access control policies. We focus on XACML (eXtensible Access Control Markup Language) policy since XACML has become the de facto standard for specifying and enforcing access control policies for various Web-based applications and services. We introduce a policy-based segmentation technique to accurately identify policy anomalies and derive effective anomaly resolutions. We also discuss a proof-of-concept implementation of our method called XAnalyzer and demonstrate how efficiently our approach can discover and resolve policy anomalies.", "title": "" }, { "docid": "2f01e912a6fbafca1e791ef18fb51ceb", "text": "Visualizing the result of users' opinion mining on twitter using social network graph can play a crucial role in decision-making. Available data visualizing tools, such as NodeXL, use a specific file format as an input to construct and visualize the social network graph. One of the main components of the input file is the sentimental score of the users' opinion. This motivates us to develop a free and open source system that can take the opinion of users in raw text format and produce easy-to-interpret visualization of opinion mining and sentiment analysis result on a social network. We use a public machine learning library called LingPipe Library to classify the sentiments of users' opinion into positive, negative and neutral classes. Our proposed system can be used to analyze and visualize users' opinion on the network level to determine sub-social structures (sub-groups). Moreover, the proposed system can also identify influential people in the social network by using node level metrics such as betweenness centrality. In addition to the network level and node level analysis, our proposed method also provides an efficient filtering mechanism by either time and date, or the sentiment score. We tested our proposed system using user opinions about different Samsung products and related issues that are collected from five official twitter accounts of Samsung Company. 
The test results show that our proposed system will be helpful to analyze and visualize the opinion of users at both network level and node level.", "title": "" }, { "docid": "c4027028f59192add0d14d21d99eb759", "text": "Individual differences in mind wandering and reading comprehension were examined in the current study. In particular, individual differences in mind wandering, working memory capacity, interest in the current topic, motivation to do well on the task, and topic experience and their relations with reading comprehension were examined in the current study. Using confirmatory factor analysis and structural equation modeling it was found that variation in mind wandering while reading was influenced by working memory capacity, topic interest, and motivation. Furthermore, these same factors, along with topic experience, influenced individual differences in reading comprehension. Importantly, several factors had direct effects on reading comprehension (and mind wandering), while the relation between reading comprehension (and mind wandering) and other factors occurred via indirect effects. These results suggest that both domain-general and domain-specific factors contribute to mind wandering while reading and to reading comprehension.", "title": "" }, { "docid": "887c8924466bae888efa5c7c4cbef594", "text": "UNLABELLED\nThe importance of movement is often overlooked because it is such a natural part of human life. It is, however, crucial for a child's physical, cognitive and social development. In addition, experiences support learning and development of fundamental movement skills. The foundations of those skills are laid in early childhood and essential to encourage a physically active lifestyle. Fundamental movement skill performance can be examined with several assessment tools. The choice of a test will depend on the context in which the assessment is planned. This article compares seven assessment tools which are often referred to in European or international context. It discusses the tools' usefulness for the assessment of movement skill development in general population samples. After a brief description of each assessment tool the article focuses on contents, reliability, validity and normative data. A conclusion outline of strengths and weaknesses of all reviewed assessment tools focusing on their use in educational research settings is provided and stresses the importance of regular data collection of fundamental movement skill development among preschool children. Key pointsThis review discusses seven movement skill assessment tool's test content, reliability, validity and normative samples.The seven assessment tools all showed to be of great value. Strengths and weaknesses indicate that test choice will depend on specific purpose of test use.Further data collection should also include larger data samples of able bodied preschool children.Admitting PE specialists in assessment of fundamental movement skill performance among preschool children is recommended.The assessment tool's normative data samples would benefit from frequent movement skill performance follow-up of today's children.\n\n\nABBREVIATIONS\nMOT 4-6: Motoriktest fur vier- bis sechsjährige Kinder, M-ABC: Movement Assessment Battery for Children, PDMS: Peabody Development Scales, KTK: Körper-Koordinationtest für Kinder, TGDM: Test of Gross Motor Development, MMT: Maastrichtse Motoriektest, BOTMP: Bruininks-Oseretsky Test of Motor Proficiency. 
ICC: intraclass correlation coefficient, NR: not reported, GM: gross motor, LV: long version, SV: short version, LF: long form, SF: short form, STV: subtest version, SEMs: standard errors of measurement, TMQ: Total Motor Quotient, TMC: Total Motor Composite, CSSA: Comprehensive Scales of Student Abilities MSEL: Mullen Scales of Early learning: AGS Edition AUC: Areas under curve BC: Battery composite ROC: Receiver operating characteristic.", "title": "" }, { "docid": "9df6e9bd41b7a5c48f10cd542fa5e6d9", "text": "Many machine learning problems can be interpreted as learning for matching two types of objects (e.g., images and captions, users and products, queries and documents, etc.). The matching level of two objects is usually measured as the inner product in a certain feature space, while the modeling effort focuses on mapping of objects from the original space to the feature space. This schema, although proven successful on a range of matching tasks, is insufficient for capturing the rich structure in the matching process of more complicated objects. In this paper, we propose a new deep architecture to more effectively model the complicated matching relations between two objects from heterogeneous domains. More specifically, we apply this model to matching tasks in natural language, e.g., finding sensible responses for a tweet, or relevant answers to a given question. This new architecture naturally combines the localness and hierarchy intrinsic to the natural language problems, and therefore greatly improves upon the state-of-the-art models.", "title": "" }, { "docid": "7a06e93662213579dc6cd07e4160c6ca", "text": "This study proposes and evaluates an efficient real-time taxi dispatching strategy that solves the linear assignment problem to find a globally optimal taxi-to-request assignment at each decision epoch. The authors compare the assignment-based strategy with two popular rule-based strategies. They evaluate dispatching strategies in detail in the city of Berlin and the neighboring region of Brandenburg using the microscopic large-scale MATSim simulator. The assignment-based strategy produced better results for both drivers (less idle driving) and passengers (less waiting). However, computing the assignments for thousands of taxis in a huge road network turned out to be computationally demanding. Certain adaptations pertaining to the cost matrix calculation were necessary to increase the computational efficiency and assure real-time responsiveness.", "title": "" }, { "docid": "28a11e458f0c922e3354065c7f1feb8e", "text": "Diabetes mellitus (DM) is the most common of the endocrine disorders and represents a global health problem. DM is characterized by chronic hyperglycaemia due to relative or absolute lack of insulin or the actions of insulin. Insulin is the main treatment for patients with type 1 DM and it is also important in type 2 DM when blood glucose levels cannot be controlled by diet, weight loss, exercise and oral medications alone. Prior to the availability of insulin, dietary measures, including the traditional medicines derived from plants, were the major form of treatment. A multitude of plants have been used for the treatment of diabetes throughout the world. One such plant is Momordica charantia (Linn Family: Cucurbaceae), whose fruit is known as Karela or bittergourd. For a long time, several workers have studied the effects of this plant in DM. Treatment with M. charantia fruit juice reduced blood glucose levels, improved body weight and glucose tolerance. M. 
charantia fruit juice can also inhibit glucose uptake by the gut and stimulate glucose uptake by skeletal muscle cells. Moreover, the juice of this plant preserves islet β cells and β cell functions, normalises the systolic blood pressure, and modulates xenobiotic metabolism and oxidative stress. M. charantia also has anti-carcinogenic properties. In conclusion, M. charantia has tremendous beneficial value in the treatment of DM.", "title": "" }, { "docid": "1377bac68319fcc57fbafe6c21e89107", "text": "In recent years, robotics in the agriculture sector, with its implementation based on the precision agriculture concept, has become a newly emerging technology. The main reasons behind automation of farming processes are to save the time and energy required for performing repetitive farming tasks and to increase the productivity of yield by treating every crop individually using the precision farming concept. The design of such robots is modeled based on a particular approach and certain considerations of the agricultural environment in which it is going to work. These considerations and different approaches are discussed in this paper. Also, a prototype of an autonomous Agriculture Robot is presented which is specifically designed for the seed sowing task only. It is a four-wheeled vehicle controlled by an LPC2148 microcontroller. Its working is based on precision agriculture, which enables efficient seed sowing at optimal depth and at optimal distances between crops and their rows, specific for each crop type.", "title": "" }, { "docid": "a239f42e7212bd0967d417338106c6f6", "text": "The aim of this article is to present a new technique for augmentation of deficient alveolar ridges and/or correction of osseous defects around dental implants. Current knowledge regarding bone augmentation for treatment of osseous defects prior to and in combination with dental implant placement is critically appraised. The \"sandwich\" bone augmentation technique is demonstrated step by step. Five pilot cases with implant dehiscence defects averaging 10.5 mm were treated with the technique. At 6 months, the sites were uncovered, and complete defect fill was noted in all cases. Results from this pilot case study indicated that the sandwich bone augmentation technique appears to enhance the outcomes of bone augmentation by using the positive properties of each applied material (autograft, DFDBA, hydroxyapatite, and collagen membrane). Future clinical trials for comparison of this approach with other bone augmentation techniques and histologic evaluation of the outcomes are needed to validate these findings.", "title": "" }, { "docid": "d0b509f5776f7cdf3c4a108e0dfafd47", "text": "Motivated by the recent success in applying deep learning for natural image analysis, we designed an image segmentation system based on a deep Convolutional Neural Network (CNN) to detect the presence of soft tissue sarcoma from multi-modality medical images, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET). Multi-modality imaging analysis using deep learning has been increasingly applied in the field of biomedical imaging and brought unique value to medical applications. However, it is still challenging to perform the multi-modal analysis owing to a major difficulty: how to fuse the information derived from different modalities. There exists a variety of possible schemes, which are application-dependent and lack a unified framework to guide their designs.
Aiming at lesion segmentation with multi-modality images, we innovatively propose a conceptual image fusion architecture for supervised biomedical image analysis. The architecture has been optimized by testing different fusion schemes within the CNN structure, including fusing at the feature learning level, fusing at the classifier level, and the fusing at the decision-making level. It is found from the results that while all the fusion schemes outperform the single-modality schemes, fusing at the feature level can generally achieve the best performance in terms of both accuracy and computational cost, but can also suffer from the decreased robustness due to the presence of large errors in one or more image modalities.", "title": "" }, { "docid": "2ab51bd16640532e17f19f9df3880a1a", "text": "monitor retail store shelves M. Marder S. Harary A. Ribak Y. Tzur S. Alpert A. Tzadok Using image analytics to monitor the contents and status of retail store shelves is an emerging trend with increasing business importance. Detecting and identifying multiple objects on store shelves involves a number of technical challenges. The particular nature of product package design, the arrangement of products on shelves, and the requirement to operate in unconstrained environments are just a few of the issues that must be addressed. We explain how we addressed these challenges in a system for monitoring planogram compliance, developed as part of a project with Tesco, a large multinational retailer. The new system offers store personnel an instant view of shelf status and a list of action items for restocking shelves. The core of the system is based on its ability to achieve high rates of product recognition, despite the very small visual differences between some products. This paper covers how state-of-the-art methods for object detection behave when applied to this problem. We also describe the innovative aspects of our implementation for size-scale-invariant product recognition and fine-grained classification.", "title": "" }, { "docid": "91dbb5df6bc5d3db43b51fc7a4c84468", "text": "An assortment of algorithms, termed three-dimensional (3D) scan-conversion algorithms, is presented. These algorithms scan-convert 3D geometric objects into their discrete voxel-map representation within a Cubic Frame Buffer (CFB). The geometric objects that are studied here include three-dimensional lines, polygons (optionally filled), polyhedra (optionally filled), cubic parametric curves, bicubic parametric surface patches, circles (optionally filled), and quadratic objects (optionally filled) like those used in constructive solid geometry: cylinders, cones, and spheres.\nAll algorithms presented here do scan-conversion with computational complexity which is linear in the number of voxels written to the CFB. All algorithms are incremental and use only additions, subtractions, tests and simpler operations inside the inner algorithm loops. Since the algorithms are basically sequential, the temporal complexity is also linear. However, the polyhedron-fill and sphere-fill algorithms have less than linear temporal complexity, as they use a mechanism for writing a voxel run into the CFB. The temporal complexity would then be linear with the number of pixels in the object's 2D projection. All algorithms have been implemented as part of the CUBE Architecture, which is a voxel-based system for 3D graphics. 
The CUBE architecture is also presented.", "title": "" }, { "docid": "cb997e2c09f6ca55203028f72ebcc7d5", "text": "This paper presents a set of procedures for detecting the primary embryo development of chicken eggs using the Self-Organizing Mapping (SOM) technique and the K-means clustering algorithm. Our strategy consists of preprocessing an acquired color image with a color space transformation, grouping the data with the Self-Organizing Mapping technique, and predicting the embryo development with the K-means clustering method. In our experiment, the results show that our method is more efficient. Processing with this algorithm can indicate the developmental period of the chicken embryo during hatching. The accuracy of the algorithm depends on the adjustment of the optimum number of learning iterations. In an experiment on the learning rate using a sample of 4 eggs, the optimum learning rate was found to be in the range of 0.1 to 0.5. For efficiency, the optimum number of learning iterations was found to be in the range of 250 to 300 rounds.", "title": "" } ]
scidocsrr
85caea374eb33990173850c76879de15
Polynomial-time probabilistic reasoning with partial observations via implicit learning in probability logics
[ { "docid": "59a0eb620744f0e53d5a50fab0fcd708", "text": "Suppose that we wish to learn from examples and counterexamples a criterion for recognizing whether an assembly of wooden blocks constitutes an arch. Suppose also that we have preprogrammed recognizers for various relationships e.g. on-top-of(z, y), above(z, y), etc. and believe that some quantified expression in terms of these base relationships should suffice to approximate the desired notion of an arch. How can we formulate such a relational learning problem so as to exploit the benefits that are demonstrable in propositional learning, such as attribute-efficient learning by linear separators, and error-resilient learning ? We believe that learning in a general setting that allows for multiple objects and relations in this way is paradigmatic of the more fundamental questions that need to be addressed if one is to resolve the following dilemma that arises in the design of intelligent systems: Mathematical logic is an attractive language of description because it has clear semantics and sound proof procedures. However, as a basis for large programmed systems it leads to brittleness because, in practice, consistent usage of the various predicate names throughout a system cannot be guaranteed, except in application areas such as mathematics where the viability of the axiomatic method has been demonstrated independently. In this paper we develop the following approach to circumventing this problem. We suggest that brittleness can be overcome by using a new kind of logic in which each statement is learnable. By allowing thesystem to learn rules empirically from the environment, relative to any particular programs it may have for recognizing some base predicates, we enable the system to acquire a set of statements approximately consistent with each other and with the world, without the need for a globally knowledgeable and consistent p*OgrUllll~~. We illustrate this approach by describing a simple logic that hzs a sound and efficient proof procedure for reasoning about instances, and that is rendered robust by having the rules learnable. The complexity and accuracy of both ~o,,yri~,,, ACM ,999 l-581 13.067.8199105...%5.00 learning and deduction are provably polynomial bounded.", "title": "" } ]
[ { "docid": "e0776e4e73d63d75ba959972be601f6c", "text": "Mini-batch stochastic gradient methods are state of the art for distributed training of deep neural networks. In recent years, a push for efficiency for large-scale applications has lead to drastically large mini-batch sizes. However, two significant roadblocks remain for such large-batch variants. On one hand, increasing the number of workers introduces communication bottlenecks, and efficient algorithms need to be able to adapt to the changing computation vs. communication tradeoffs in heterogeneous systems. On the other hand, independent of communication, large-batch variants do not generalize well. We argue that variants of recently proposed local SGD, which performs several update steps on a local model before communicating with other workers can solve both these problems. Our experiments show performance gains in training efficiency, scalability, and adaptivity to the underlying system resources. We propose a variant, postlocal SGD that significantly improves the generalization performance of large batch sizes while reducing communication. Additionally, post-local SGD converges to flatter minima as opposed to large-batch methods, which can be understood by relating of local SGD to noise injection. Thus, local SGD is an enticing alternative to large-batch SGD.", "title": "" }, { "docid": "9ba3fb8585c674003494c6c17abe9563", "text": "s grammatical structure from all irrelevant contexts, from its", "title": "" }, { "docid": "3ea0e0ee7061184ebc81f79695ac717b", "text": "In OMS patients [Figure 1b], the most important pathology change is the loosen of IT tendon sheath.[3] After that, the OH becomes short and fibrosis because of the disuse atrophy. When the patient swallows, the OH cannot be extended, the IT moved laterally and superiorly. The posterior clavicle margin of OH replace IT as a new origin of force, When the patient swallow, the shorten OH like a string, form an X‐shaped tent to elevate the SCM in the lateral neck during upward movement of the hyoid bone. The elevated SCM formed the mass in the neck.", "title": "" }, { "docid": "4417f505ed279689afa0bde104b3d472", "text": "A single-cavity dual-mode substrate integrated waveguide (SIW) bandpass filter (BPF) for X-band application is presented in this paper. Coplanar waveguide (CPW) is used as SIW-microstrip transition in this design. Two slots of the CPW with unequal lengths are used to excite two degenerate modes, i.e. TE102 and TE201. A slot line is etched on the ground plane of the SIW cavity for perturbation. Its size and position are related to the effect of mode-split, namely the coupling between the two degenerate modes. Due to the cancellation of the two modes, a transmission zero in the lower stopband of the BPF is achieved, which improves the selectivity of the proposed BPF. And the location of the transmission zero can be controlled by adjusting the position and the size of the slot line perturbation properly. By introducing source-load coupling, an additional transmission zero is produced in the upper stopband of the BPF, it enhances the stopband performance of the BPF. Influences of the slot line perturbation on the BPF have been studied. A dual-mode BPF for X-band application has been designed, fabricated and measured. 
A good agreement between simulation and measurement verifies the validity of this design methodology.", "title": "" }, { "docid": "e41c55eb50120c780b6e66df4cfc2e05", "text": "Nanowire (NW) devices, particularly the gate-all-around (GAA) CMOS architecture, have emerged as the front-runner for pushing CMOS scaling beyond the roadmap. These devices offer unique advantages over their planar counterparts which make them feasible as an option for 22 -nm and beyond technology nodes. This paper reviews the current technology status for realizing the GAA NW device structures and their applications in logic circuit and nonvolatile memories. We also take a glimpse into applications of NWs in the ldquomore-than-Moorerdquo regime and briefly discuss the application of NWs as biochemical sensors. Finally, we summarize the status and outline the challenges and opportunities of the NW technology.", "title": "" }, { "docid": "34d8bd1dd1bbe263f04433a6bf7d1b29", "text": "algorithms for image processing and computer vision algorithms for image processing and computer vision exploring computer vision and image processing algorithms free ebooks algorithms for image processing and computer parallel algorithms for digital image processing computer algorithms for image processing and computer vision pdf algorithms for image processing and computer vision computer vision: algorithms and applications brown gpu algorithms for image processing and computer vision high-end computer vision algorithms image processing handbook of computer vision algorithms in image algebra the university of cs 4487/9587 algorithms for image analysis an analysis of rigid image alignment computer vision computer vision with matlab massachusetts institute of handbook of computer vision algorithms in image algebra tips and tricks for image processing and computer vision limitations of human vision what is computer vision algorithms for image processing and computer vision gbv algorithms for image processing and computer vision. 2nd computer vision for nanoscale imaging algorithms for image processing and computer vision a survey of distributed computer vision algorithms computer vision: algorithms and applications sci home algorithms for image processing and computer vision ebook engineering of computer vision algorithms using algorithms for image processing and computer vision by j real-time algorithms: prom signal processing to computer expectationmaximization algorithms for image processing automated techniques for detection and recognition of algorithms for image processing and computer vision dictionary of computer vision and image processing implementing video image processing algorithms on fpga open source libraries for image processing computer vision and image processing: a practical approach computer vision i algorithms and applications: image algorithms for image processing and computer vision algorithms for image processing and computer vision j. r", "title": "" }, { "docid": "5916e605ab78bf75925fecbdc55422cd", "text": "This paper presents a new method for estimating the average heart rate from a foot/ankle worn photoplethysmography (PPG) sensor during fast bike activity. Placing the PPG sensor on the lower half of the body allows more energy to be collected from energy harvesting in order to give a power autonomous sensor node, but comes at the cost of introducing significant motion interference into the PPG trace. 
We present a normalised least mean square adaptive filter and short-time Fourier transform based algorithm for estimating heart rate in the presence of this motion contamination. Results from 8 subjects show the new algorithm has an average error of 9 beats-per-minute when compared to an ECG gold standard.", "title": "" }, { "docid": "4d4c0d5a0abcd38aff2ba514f080edc0", "text": "We present an approach to adaptively utilize deep neural networks in order to reduce the evaluation time on new examples without loss of classification performance. Rather than attempting to redesign or approximate existing networks, we propose two schemes that adaptively utilize networks. First, we pose an adaptive network evaluation scheme, where we learn a system to adaptively choose the components of a deep network to be evaluated for each example. By allowing examples correctly classified using early layers of the system to exit, we avoid the computational time associated with full evaluation of the network. Building upon this approach, we then learn a network selection system that adaptively selects the network to be evaluated for each example. We exploit the fact that many examples can be correctly classified using relatively efficient networks and that complex, computationally costly networks are only necessary for a small fraction of examples. By avoiding evaluation of these complex networks for a large fraction of examples, computational time can be dramatically reduced. Empirically, these approaches yield dramatic reductions in computational cost, with up to a 2.8x speedup on state-of-the-art networks from the ImageNet image recognition challenge with minimal (less than 1%) loss of accuracy.", "title": "" }, { "docid": "7985e61fc9a4fa1d92fa6fafd4747ff2", "text": "A single-ended InP transimpedance amplifier (TIA) for next generation high-bandwidth optical fiber communication systems is presented. The TIA exhibits 48 dB-Omega transimpedance and has a 3-dB bandwidth of 92 GHz. The input-referred current noise is 20 pA/radicHz and the transimpedance group delay is below 10 ps over the entire measured frequency range.", "title": "" }, { "docid": "9ee78ad640b8c876dc31c863e4114751", "text": "Cognitive linguistics has emerged in the last twenty-five years as a powerful approach to the study of language, conceptual systems, human cognition, and general meaning construction. It addresses within language the structuring of basic conceptual categories such as space and time, scenes and events, entities and processes, motion and location, force and causation. It addresses the structuring of ideational and affective categories attributed to cognitive agents, such as attention and perspective, volition and intention. 1 In doing so, it develops a rich conception of grammar that reflects fundamental cognitive abilities: the ability to form structured conceptualizations with multiple levels of organization, to conceive of a situation at varying levels of abstraction, to establish correspondences between facets of different structures, and to construe the same situation in alternate ways. 2 Cognitive linguistics recognizes that the study of language is the study of language use and that when we engage in any language activity, we draw unconsciously on vast cognitive and cultural resources, call up models and frames, set up multiple connections, coordinate large arrays of information, and engage in creative mappings, transfers, and elaborations. 
Language does not", "title": "" }, { "docid": "491d98644c62c6b601657e235cb48307", "text": "The purpose of this study was to investigate the use of three-dimensional display formats for judgments of spatial information using an exocentric frame of reference. Eight subjects judged the azimuth and elevation that separated two computer-generated objects using either a perspective or stereoscopic display. Errors, which consisted of the difference in absolute value between the estimated and actual azimuth or elevation, were analyzed as the response variable. The data indicated that the stereoscopic display resulted in more accurate estimates of elevation, especially for images aligned approximately orthogonally to the viewing vector. However, estimates of relative azimuth direction were not improved by use of the stereoscopic display. Furthermore, it was shown that the effect of compression resulting from a 45-deg computer graphics eye point elevation produced a response bias that was symmetrical around the horizontal plane of the reference cube, and that the depth cue of binocular disparity provided by the stereoscopic display reduced the magnitude of the compression errors. Implications of the results for the design of spatial displays are discussed.", "title": "" }, { "docid": "49329aef5ac732cc87b3cc78520c7ff5", "text": "This paper surveys the previous and ongoing research on surface electromyogram (sEMG) signal processing implementation through various hardware platforms. The development of system that incorporates sEMG analysis capability is essential in rehabilitation devices, prosthesis arm/limb and pervasive healthcare in general. Most advanced EMG signal processing algorithms rely heavily on computational resource of a PC that negates the elements of portability, size and power dissipation of a pervasive healthcare system. Signal processing techniques applicable to sEMG are discussed with aim for proper execution in platform other than full-fledge PC. Performance and design parameters issues in some hardware implementation are also being pointed up. The paper also outlines the trends and alternatives solutions in developing portable and efficient EMG signal processing hardware.", "title": "" }, { "docid": "649797f21efa24c523361afee80419c5", "text": "Web search engines typically provide search results without considering user interests or context. We propose a personalized search approach that can easily extend a conventional search engine on the client side. Our mapping framework automatically maps a set of known user interests onto a group of categories in the Open Directory Project (ODP) and takes advantage of manually edited data available in ODP for training text classifiers that correspond to, and therefore categorize and personalize search results according to user interests. In two sets of controlled experiments, we compare our personalized categorization system (PCAT) with a list interface system (LIST) that mimics a typical search engine and with a nonpersonalized categorization system (CAT). In both experiments, we analyze system performances on the basis of the type of task and query length. We find that PCAT is preferable to LIST for information gathering types of tasks and for searches with short queries, and PCAT outperforms CAT in both information gathering and finding types of tasks, and for searches associated with free-form queries. 
From the subjects' answers to a questionnaire, we find that PCAT is perceived as a system that can find relevant Web pages quicker and easier than LIST and CAT.", "title": "" }, { "docid": "9eccf674ee3b3826b010bc142ed24ef0", "text": "We present an architecture of a recurrent neural network (RNN) with a fullyconnected deep neural network (DNN) as its feature extractor. The RNN is equipped with both causal temporal prediction and non-causal look-ahead, via auto-regression (AR) and moving-average (MA), respectively. The focus of this paper is a primal-dual training method that formulates the learning of the RNN as a formal optimization problem with an inequality constraint that provides a sufficient condition for the stability of the network dynamics. Experimental results demonstrate the effectiveness of this new method, which achieves 18.86% phone recognition error on the TIMIT benchmark for the core test set. The result approaches the best result of 17.7%, which was obtained by using RNN with long short-term memory (LSTM). The results also show that the proposed primal-dual training method produces lower recognition errors than the popular RNN methods developed earlier based on the carefully tuned threshold parameter that heuristically prevents the gradient from exploding.", "title": "" }, { "docid": "525182fb2d7c2d6b4e99317bc4e43fff", "text": "This paper proposes a dual-rotor, toroidal-winding, axial-flux vernier permanent magnet (VPM) machine. By the combination of toroidal windings with the rotor-stator-rotor topology, the end winding length of the machine is significantly reduced when compared with the regular VPM machine. Based on the airgap permeance function, the back-EMF and torque expressions are derived, through which the nature of this machine is revealed. The influence of pole ratio (ratio of rotor pole pair number to stator pole pair number) and main geometric parameters such as slot opening, magnet thickness etc., on torque performance is then analytically investigated. Both the quasi-3-dimensional (quasi-3D) finite element analysis (FEA) and 3D FEA are applied to verify the theoretical analysis. With the current density of 4.2 A/mm2, the torque density of the proposed machine can reach 32.6 kNm/m3. A prototype has been designed and is in manufacturing process. Experimental validation will be presented in the future.", "title": "" }, { "docid": "efd91eb40476a15cb472a2331765bb29", "text": "Online travel portals are becoming important parts for sharing travel information. User generated content and information in user reviews is valuable to both travel companies and to other people and can have a substantial impact on their decision making process. The automatic analysis of used generated reviews can provide a deeper understanding of users attitudes and opinions. In this paper, we present a work on the automatic analysis of user reviews on the booking.com portal and the automatic extraction and visualization of information. An aspect based approach is followed where latent dirichlet allocation is utilized in order to model topic opinion and natural language processing techniques are used to specify the dependencies on a sentence level and determine interactions between words and aspects. Then Naïve Bayes machine learning method is used to recognize the polarity of the user’s opinion utilizing the sentence’s dependency triples. To evaluate the performance of our method, we collected a wide set of reviews for a series of hotels from booking.com. 
The results from the evaluation study are very encouraging and indicate that the system is fast, scalable and, most of all, accurate in analyzing user reviews and in specifying users' opinions and stance towards the characteristics of the hotels, and that it can provide comprehensive hotel information.", "title": "" }, { "docid": "067ec456d76cce7978b3d2f0c67269ed", "text": "With the development of deep learning, the performance of hyperspectral image (HSI) classification has been greatly improved in recent years. The shortage of training samples has become a bottleneck for further improvement of performance. In this paper, we propose a novel convolutional neural network framework for the characteristics of hyperspectral image data called HSI-CNN, which can also provide ideas for the processing of one-dimensional data. Firstly, the spectral-spatial feature is extracted from a target pixel and its neighbors. Then, a number of one-dimensional feature maps, obtained by a convolution operation on spectral-spatial features, are stacked into a two-dimensional matrix. Finally, the two-dimensional matrix, considered as an image, is fed into a standard CNN. This is why we call it HSI-CNN. In addition, we also implement two deep network classification models, called HSI-CNN+XGBoost and HSI-CapsNet, in order to compare the performance of our framework. Experiments show that the performance of hyperspectral image classification is improved efficiently with the HSI-CNN framework. We evaluate the model's performance using four popular HSI datasets, which are the Kennedy Space Center (KSC), Indian Pines (IP), Pavia University scene (PU) and Salinas scene (SA). As far as we are concerned, the accuracy of HSI-CNN keeps pace with state-of-the-art methods, reaching 99.28%, 99.09%, 99.57% and 98.97%, respectively.", "title": "" }, { "docid": "8bea1f9e107cfcebc080bc62d7ac600d", "text": "The introduction of wireless transmissions into the data center has been shown to be promising in improving the cost effectiveness of data center networks (DCNs). For high transmission flexibility and performance, a fundamental challenge is to increase the wireless availability and enable fully hybrid and seamless transmissions over both wired and wireless DCN components. Rather than limiting the number of wireless radios by the size of top-of-rack switches, we propose a novel DCN architecture, Diamond, which nests the wired DCN with radios equipped on all servers. To harvest the gain allowed by the rich reconfigurable wireless resources, we propose the low-cost deployment of scalable 3-D ring reflection spaces (RRSs), which are interconnected with streamlined wired herringbone to enable a large number of concurrent wireless transmissions through high-performance multi-reflection of radio signals over metal. To increase the number of concurrent wireless transmissions within each RRS, we propose a precise reflection method to reduce the wireless interference. We build a 60-GHz-based testbed to demonstrate the function and transmission ability of our proposed architecture. We further perform extensive simulations to show the significant performance gain of Diamond, in supporting up to five times higher server-to-server capacity, enabling network-wide load balancing, and ensuring high fault tolerance.", "title": "" }, { "docid": "b205efe2ce90ec2ee3a394dd01202b60", "text": "Recurrent Neural Networks (RNNs) are a subtype of neural networks that use feedback connections. Several types of RNN models are used in predicting financial time series.
This study was conducted to develop models to predict daily stock prices of selected listed companies of the Colombo Stock Exchange (CSE) based on a Recurrent Neural Network (RNN) approach and to measure the accuracy of the models developed and identify the shortcomings of the models if present. Feedforward, Simple Recurrent Neural Network (SRNN), Gated Recurrent Unit (GRU) and Long Short Term Memory (LSTM) architectures were employed in building models. Closing, High and Low prices of the past two days were selected as input variables for each company. Feedforward networks produce the highest and lowest forecasting errors. The forecasting accuracy of the best feedforward networks is approximately 99%. SRNN and LSTM networks generally produce lower errors compared with feedforward networks, but on some occasions the error is higher than that of feedforward networks. Compared to the other two networks, GRU networks have comparatively higher forecasting errors.", "title": "" }, { "docid": "ab101c577fcdefb7ed09b02c563ccdf4", "text": "Can online trackers and network adversaries de-anonymize web browsing data readily available to them? We show—theoretically, via simulation, and through experiments on real user data—that de-identified web browsing histories can be linked to social media profiles using only publicly available data. Our approach is based on a simple observation: each person has a distinctive social network, and thus the set of links appearing in one's feed is unique. Assuming users visit links in their feed with higher probability than a random user, browsing histories contain tell-tale marks of identity. We formalize this intuition by specifying a model of web browsing behavior and then deriving the maximum likelihood estimate of a user's social profile. We evaluate this strategy on simulated browsing histories, and show that given a history with 30 links originating from Twitter, we can deduce the corresponding Twitter profile more than 50% of the time. To gauge the real-world effectiveness of this approach, we recruited nearly 400 people to donate their web browsing histories, and we were able to correctly identify more than 70% of them. We further show that several online trackers are embedded on sufficiently many websites to carry out this attack with high accuracy. Our theoretical contribution applies to any type of transactional data and is robust to noisy observations, generalizing a wide range of previous de-anonymization attacks. Finally, since our attack attempts to find the correct Twitter profile out of over 300 million candidates, it is—to our knowledge—the largest-scale demonstrated de-anonymization to date. CCS Concepts: •Security and privacy → Pseudonymity, anonymity and untraceability; •Information systems → Online advertising; Social networks.", "title": "" } ]
scidocsrr
33502608f9a178d91833b0daaf5d11fb
Low Cross-Polarization Vivaldi Arrays
[ { "docid": "801a197f630189ab0a9b79d3cbfe904b", "text": "Historically, Vivaldi arrays are known to suffer from high cross-polarization when scanning in the nonprincipal planes—a fault without a universal solution. In this paper, a solution to this issue is proposed in the form of a new Vivaldi-type array with low cross-polarization termed the Sliced Notch Antenna (SNA) array. For the first proof-of-concept demonstration, simulations and measurements are comparatively presented for two single-polarized <inline-formula> <tex-math notation=\"LaTeX\">$19 \\times 19$ </tex-math></inline-formula> arrays—the proposed SNA and its Vivaldi counterpart—each operating over a 1.2–12 GHz (10:1) band. Both arrays are built using typical vertically integrated printed-circuit board cards, and are designed to exhibit VSWR < 2.5 within a 60° scan cone over most of the 10:1 band as infinite arrays. Measurement results compare very favorably with full-wave finite array simulations that include array truncation effects. The SNA array element demonstrates well-behaved polarization performance versus frequency, with more than 20 dB of D-plane <inline-formula> <tex-math notation=\"LaTeX\">$\\theta \\!=\\!45 {^{\\circ }}$ </tex-math></inline-formula> polarization purity improvement at the high frequency. Moreover, the SNA element also: 1) offers better suppression of classical Vivaldi E-plane scan blindnesses; 2) requires fewer plated through vias for stripline-based designs; and 3) allows relaxed adjacent element electrical contact requirements for dual-polarized arrangements.", "title": "" }, { "docid": "cbdace4636017f925b89ecf266fde019", "text": "It is traditionally known that wideband apertures lose bandwidth when placed over a ground plane. To overcome this issue, this paper introduces a new non-symmetric tightly coupled dipole element for wideband phased arrays. The proposed array antenna incorporates additional degrees of freedom to control capacitance and cancel the ground plane inductance. Specifically, each arm on the dipole is different than the other (or non-symmetric). The arms are identical near the center feed section but dissimilar towards the ends, forming a ball-and-cup. It is demonstrated that the non-symmetric qualities achieve wideband performance. Concurrently, a design example for planar installation with balun and matching network is presented to cover X-band. The balun avoids extraneous radiation, maintains the array's low-profile height and is printed on top of the ground plane connecting to the array aperture with 180° out of phase vertical twin-wire transmission lines. To demonstrate the concept, a 64-element array with integrated feed and matching network is designed, fabricated and verified experimentally. The array aperture is placed λ/7 (at 8 GHz) above the ground plane and shown to maintain a active VSWR less than 2 from 8-12.5 GHz while scanning up to 70° and 60° in E- and H-plane, respectively. The array's simulated diagonal plane cross-polarization is approximately 10 dB below the co-polarized component during 60° diagonal scan and follows the theoretical limit for an infinite current sheet.", "title": "" } ]
[ { "docid": "9159ffb919402640381775f76b701ac8", "text": "With the vigorous development of the World Wide Web, many large-scale knowledge bases (KBs) have been generated. To improve the coverage of KBs, an important task is to integrate the heterogeneous KBs. Several automatic alignment methods have been proposed which achieve considerable success. However, due to the inconsistency and uncertainty of large-scale KBs, automatic techniques for KBs alignment achieve low quality (especially recall). Thanks to the open crowdsourcing platforms, we can harness the crowd to improve the alignment quality. To achieve this goal, in this paper we propose a novel hybrid human-machine framework for large-scale KB integration. We rst partition the entities of different KBs into many smaller blocks based on their relations. We then construct a partial order on these partitions and develop an inference model which crowdsources a set of tasks to the crowd and infers the answers of other tasks based on the crowdsourced tasks. Next we formulate the question selection problem, which, given a monetary budget B, selects B crowdsourced tasks to maximize the number of inferred tasks. We prove that this problem is NP-hard and propose greedy algorithms to address this problem with an approximation ratio of 1--1/e. Our experiments on real-world datasets indicate that our method improves the quality and outperforms state-of-the-art approaches.", "title": "" }, { "docid": "c0d4f81bb55e1578f2a11dc712937a80", "text": "Recognizing mathematical expressions in PDF documents is a new and important field in document analysis. It is quite different from extracting mathematical expressions in image-based documents. In this paper, we propose a novel method by combining rule-based and learning-based methods to detect both isolated and embedded mathematical expressions in PDF documents. Moreover, various features of formulas, including geometric layout, character and context content, are used to adapt to a wide range of formula types. Experimental results show satisfactory performance of the proposed method. Furthermore, the method has been successfully incorporated into a commercial software package for large-scale Chinese e-Book production.", "title": "" }, { "docid": "39daa09f2e57903abe1109335127d4b9", "text": "Semantic search promises to provide more accurate result than present-day keyword search. However, progress with semantic search has been delayed due to the complexity of its query languages. In this paper, we explore a novel approach of adapting keywords to querying the semantic web: the approach automatically translates keyword queries into formal logic queries so that end users can use familiar keywords to perform semantic search. A prototype system named ‘SPARK’ has been implemented in light of this approach. Given a keyword query, SPARK outputs a ranked list of SPARQL queries as the translation result. The translation in SPARK consists of three major steps: term mapping, query graph construction and query ranking. Specifically, a probabilistic query ranking model is proposed to select the most likely SPARQL query. In the experiment, SPARK achieved an encouraging translation result.", "title": "" }, { "docid": "ec03f26e8a4708c8e9f839b3006d0231", "text": "We propose an automatic diabetic retinopathy (DR) analysis algorithm based on two-stages deep convolutional neural networks (DCNN). 
Compared to existing DCNN-based DR detection methods, the proposed algorithm has the following advantages: (1) Our method can point out the location and type of lesions in the fundus images, as well as give the severity grades of DR. Moreover, since retina lesions and DR severity appear with different scales in fundus images, the integration of both local and global networks learns more complete and specific features for DR analysis. (2) By introducing an imbalanced weighting map, more attention will be given to lesion patches for DR grading, which significantly improves the performance of the proposed algorithm. In this study, we label 12,206 lesion patches and re-annotate the DR grades of 23,595 fundus images from the Kaggle competition dataset. Under the guidance of clinical ophthalmologists, the experimental results show that our local lesion detection net achieves comparable performance with trained human observers, and the proposed imbalanced weighting scheme is also proved to significantly improve the capability of our DCNN-based DR grading algorithm.", "title": "" }, { "docid": "065e6db1710715ce5637203f1749e6f6", "text": "Software fault isolation (SFI) is an effective mechanism to confine untrusted modules inside isolated domains to protect their host applications. Since its debut, researchers have proposed different SFI systems for many purposes such as safe execution of untrusted native browser plugins. However, most of these systems focus on the x86 architecture. In recent years, ARM has become the dominant architecture for mobile devices and gains in popularity in data centers. Hence there is a compelling need for an efficient SFI system for the ARM architecture. Unfortunately, existing systems either have prohibitively high performance overhead or place various limitations on the memory layout and instructions of untrusted modules.\n In this paper, we propose ARMlock, a hardware-based fault isolation for ARM. It uniquely leverages the memory domain support in ARM processors to create multiple sandboxes. Memory accesses by the untrusted module (including read, write, and execution) are strictly confined by the hardware, and instructions running inside the sandbox execute at the same speed as those outside it. ARMlock imposes virtually no structural constraints on untrusted modules. For example, they can use self-modifying code, receive exceptions, and make system calls. Moreover, system calls can be interposed by ARMlock to enforce the policies set by the host. We have implemented a prototype of ARMlock for Linux that supports the popular ARMv6 and ARMv7 sub-architectures. Our security assessment and performance measurement show that ARMlock is practical, effective, and efficient.", "title": "" }, { "docid": "350868c68de72786866173c2f6e8ae90", "text": "We introduce kernel entropy component analysis (kernel ECA) as a new method for data transformation and dimensionality reduction. Kernel ECA reveals structure relating to the Renyi entropy of the input space data set, estimated via a kernel matrix using Parzen windowing. This is achieved by projections onto a subset of entropy preserving kernel principal component analysis (kernel PCA) axes. This subset does not need, in general, to correspond to the top eigenvalues of the kernel matrix, in contrast to the dimensionality reduction using kernel PCA. We show that kernel ECA may produce strikingly different transformed data sets compared to kernel PCA, with a distinct angle-based structure.
A new spectral clustering algorithm utilizing this structure is developed with positive results. Furthermore, kernel ECA is shown to be an useful alternative for pattern denoising.", "title": "" }, { "docid": "53651510ff526a81650a6627db29d88e", "text": "Motivated by the recent and growing interest in smart grid technology, we study the operation of DC/AC inverters in an inductive microgrid. We show that a network of loads and DC/AC inverters equipped with power-frequency droop controllers can be cast as a Kuramoto model of phase-coupled oscillators. This novel description, together with results from the theory of coupled oscillators, allows us to characterize the behavior of the network of inverters and loads. Specifically, we provide a necessary and sufficient condition for the existence of a synchronized solution that is unique and locally exponentially stable. We present a selection of controller gains leading to a desirable sharing of power among the inverters, and specify the set of loads which can be serviced without violating given actuation constraints. Moreover, we propose a distributed integral controller based on averaging algorithms, which dynamically regulates the system frequency in the presence of a time-varying load. Remarkably, this distributed-averaging integral controller has the additional property that it preserves the power sharing properties of the primary droop controller. Our results hold without assumptions on identical line characteristics or voltage magnitudes.", "title": "" }, { "docid": "f0da127d64aa6e9c87d4af704f049d07", "text": "The introduction of the blue-noise spectra-high-frequency white noise with minimal energy at low frequencies-has had a profound impact on digital halftoning for binary display devices, such as inkjet printers, because it represents an optimal distribution of black and white pixels producing the illusion of a given shade of gray. The blue-noise model, however, does not directly translate to printing with multiple ink intensities. New multilevel printing and display technologies require the development of corresponding quantization algorithms for continuous tone images, namely multitoning. In order to define an optimal distribution of multitone pixels, this paper develops the theory and design of multitone, blue-noise dithering. Here, arbitrary multitone dot patterns are modeled as a layered superposition of stack-constrained binary patterns. Multitone blue-noise exhibits minimum energy at low frequencies and a staircase-like, ascending, spectral pattern at higher frequencies. The optimum spectral profile is described by a set of principal frequencies and amplitudes whose calculation requires the definition of a spectral coherence structure governing the interaction between patterns of dots of different intensities. Efficient algorithms for the generation of multitone, blue-noise dither patterns are also introduced.", "title": "" }, { "docid": "7a005d66591330d6fdea5ffa8cb9020a", "text": "First impressions influence the behavior of people towards a newly encountered person or a human-like agent. Apart from the physical characteristics of the encountered face, the emotional expressions displayed on it, as well as ambient information affect these impressions. In this work, we propose an approach to predict the first impressions people will have for a given video depicting a face within a context. We employ pre-trained Deep Convolutional Neural Networks to extract facial expressions, as well as ambient information. 
After video modeling, visual features that represent facial expression and scene are combined and fed to Kernel Extreme Learning Machine regressor. The proposed system is evaluated on the ChaLearn Challenge Dataset on First Impression Recognition, where the classification target is the ”Big Five” personality trait labels for each video. Our system achieved an accuracy of 90.94% on the sequestered test set, 0.36% points below the top system in the competition.", "title": "" }, { "docid": "6b6fd5bfbe1745a49ce497490cef949d", "text": "This paper investigates optimal power allocation strategies over a bank of independent parallel Gaussian wiretap channels where a legitimate transmitter and a legitimate receiver communicate in the presence of an eavesdropper and an unfriendly jammer. In particular, we formulate a zero-sum power allocation game between the transmitter and the jammer where the payoff function is the secrecy rate. We characterize the optimal power allocation strategies as well as the Nash equilibrium in some asymptotic regimes. We also provide a set of results that cast further insight into the problem. Our scenario, which is applicable to current OFDM communications systems, demonstrates that transmitters that adapt to jammer experience much higher secrecy rates than non-adaptive transmitters.", "title": "" }, { "docid": "841b7e21447c848fd999f9237818e52d", "text": "High-frequency B-mode images of 19 fresh human liver samples were obtained to evaluate their usefulness in determining the steatosis grade. The images were acquired by a mechanically controlled singlecrystal probe at 25 MHz. Image features derived from gray-level concurrence and nonseparable wavelet transform were extracted to classify steatosis grade using a classifier known as the support vector machine. A subsequent histologic examination of each liver sample graded the steatosis from 0 to 3. The four grades were then combined into two, three and four classes. The classification results were correlated with histology. The best classification accuracies of the two, three and four classes were 90.5%, 85.8% and 82.6%, respectively, which were markedly better than those at 7 MHz. These results indicate that liver steatosis can be more accurately characterized using high-frequency B-mode ultrasound. Limitations and their potential solutions of applying high-frequency ultrasound to liver imaging are also discussed. (E-mail: paichi@cc.ee.ntu.edu.tw) © 2005 World Federation for Ultrasound in Medicine & Biology.", "title": "" }, { "docid": "3cd565192b29593550032f695b61087c", "text": "Forcing occurs when a magician influences the audience's decisions without their awareness. To investigate the mechanisms behind this effect, we examined several stimulus and personality predictors. In Study 1, a magician flipped through a deck of playing cards while participants were asked to choose one. Although the magician could influence the choice almost every time (98%), relatively few (9%) noticed this influence. In Study 2, participants observed rapid series of cards on a computer, with one target card shown longer than the rest. We expected people would tend to choose this card without noticing that it was shown longest. Both stimulus and personality factors predicted the choice of card, depending on whether the influence was noticed. 
These results show that combining real-world and laboratory research can be a powerful way to study magic and can provide new methods to study the feeling of free will.", "title": "" }, { "docid": "e141a1c5c221aa97db98534b339694cb", "text": "Despite the tremendous popularity and great potential, the field of Enterprise Resource Planning (ERP) adoption and implementation is littered with remarkable failures. Though many contributing factors have been cited in the literature, we argue that the integrated nature of ERP systems, which generally requires an organization to adopt standardized business processes reflected in the design of the software, is a key factor contributing to these failures. We submit that the integration and standardization imposed by most ERP systems may not be suitable for all types of organizations and thus the “fit” between the characteristics of the adopting organization and the standardized business process designs embedded in the adopted ERP system affects the likelihood of implementation success or failure. In this paper, we use structural contingency theory to identify a set of dimensions of organizational structure and ERP system characteristics that can be used to gauge the degree of fit, thus providing some insights into successful ERP implementations. Propositions are developed based on analyses regarding the success of ERP implementations in different types of organizations. These propositions also provide directions for future research that might lead to prescriptive guidelines for managers of organizations contemplating implementing ERP systems.", "title": "" }, { "docid": "79593cc56da377d834f33528b833641f", "text": "Machine learning offers a fantastically powerful toolkit for building complex systems quickly. This paper argues that it is dangerous to think of these quick wins as coming for free. Using the framework of technical debt, we note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning. The goal of this paper is to highlight several machine learning specific risk factors and design patterns to be avoided or refactored where possible. These include boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, changes in the external world, and a variety of system-level anti-patterns. 1 Machine Learning and Complex Systems. Real-world software engineers are often faced with the challenge of moving quickly to ship new products or services, which can lead to a dilemma between speed of execution and quality of engineering.
The concept of technical debt was first introduced by Ward Cunningham in 1992 as a way to help quantify the cost of such decisions. Like incurring fiscal debt, there are often sound strategic reasons to take on technical debt. Not all debt is necessarily bad, but technical debt does tend to compound. Deferring the work to pay it off results in increasing costs, system brittleness, and reduced rates of innovation. Traditional methods of paying off technical debt include refactoring, increasing coverage of unit tests, deleting dead code, reducing dependencies, tightening APIs, and improving documentation [4]. The goal of these activities is not to add new functionality, but to make it easier to add future improvements, be cheaper to maintain, and reduce the likelihood of bugs. One of the basic arguments in this paper is that machine learning packages have all the basic code complexity issues as normal code, but also have a larger system-level complexity that can create hidden debt. Thus, refactoring these libraries, adding better unit tests, and associated activity is time well spent but does not necessarily address debt at a systems level. In this paper, we focus on the system-level interaction between machine learning code and larger systems as an area where hidden technical debt may rapidly accumulate. At a system-level, a machine learning model may subtly erode abstraction boundaries. It may be tempting to re-use input signals in ways that create unintended tight coupling of otherwise disjoint systems. Machine learning packages may often be treated as black boxes, resulting in large masses of “glue code” or calibration layers that can lock in assumptions. Changes in the external world may make models or input signals change behavior in unintended ways, ratcheting up maintenance cost and the burden of any debt. Even monitoring that the system as a whole is operating as intended may be difficult without careful design.", "title": "" }, { "docid": "9bf6a35f056ad18c5cc44717798afa03", "text": "Big Data, and its 4 Vs – volume, velocity, variety, and veracity – have been at the forefront of societal, scientific and engineering discourse. Arguably the most important 5th V, value, is not talked about as much. How can we make sure that our data is not just big, but also valuable? WebDB 2015 has as its theme “Freshness, Correctness, Quality of Information and Knowledge on the Web”. The workshop attracted 31 submissions, of which the best 9 were selected for presentation at the workshop, and for publication in the proceedings. To set the stage, we have interviewed several prominent members of the data management community, soliciting their opinions on how we can ensure that data is not just available in quantity, but also in quality. In this interview, Serge Abiteboul, Oren Etzioni, Divesh Srivastava with Luna Dong, and Gerhard Weikum shared with us their motivation for doing research in the area of data quality, and discussed their current work and their view on the future of the field. This interview appeared as a SIGMOD Blog article.", "title": "" }, { "docid": "ecb93affc7c9b0e4bf86949d3f2006d4", "text": "We present data-dependent learning bounds for the general scenario of non-stationary nonmixing stochastic processes. Our learning guarantees are expressed in terms of a data-dependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions.
We also provide a novel analysis of a stable time series forecasting algorithm using this new notion of discrepancy that we introduce. We use our learning bounds to devise new algorithms for non-stationary time series forecasting for which we report some preliminary experimental results. An extended abstract has appeared in (Kuznetsov and Mohri, 2015).", "title": "" }, { "docid": "78e49f4e38dbafb51269fee46b8ace74", "text": "In this paper, we are concerned with image downsampling using subpixel techniques to achieve superior sharpness for small liquid crystal displays (LCDs). Such a problem exists when a high-resolution image or video is to be displayed on low-resolution display terminals. Limited by the low-resolution display, we have to shrink the image. Signal-processing theory tells us that optimal decimation requires low-pass filtering with a suitable cutoff frequency, followed by downsampling. In doing so, we need to remove many useful image details causing blurring. Subpixel-based downsampling, taking advantage of the fact that each pixel on a color LCD is actually composed of individual red, green, and blue subpixel stripes, can provide apparent higher resolution. In this paper, we use frequency-domain analysis to explain what happens in subpixel-based downsampling and why it is possible to achieve a higher apparent resolution. According to our frequency-domain analysis and observation, the cutoff frequency of the low-pass filter for subpixel-based decimation can be effectively extended beyond the Nyquist frequency using a novel antialiasing filter. Applying the proposed filters to two existing subpixel downsampling schemes called direct subpixel-based downsampling (DSD) and diagonal DSD (DDSD), we obtain two improved schemes, i.e., DSD based on frequency-domain analysis (DSD-FA) and DDSD based on frequency-domain analysis (DDSD-FA). Experimental results verify that the proposed DSD-FA and DDSD-FA can provide superior results, compared with existing subpixel or pixel-based downsampling methods.", "title": "" }, { "docid": "a95ca56f64150700cd899a5b0ee1c4b8", "text": "Due to the pervasiveness of digital technologies in all aspects of human lives, it is increasingly likely that a digital device is involved as goal, medium or simply ‘witness’ of a criminal event. Forensic investigations include recovery, analysis and presentation of information stored in digital devices and related to computer crimes. These activities often involve the adoption of a wide range of imaging and analysis tools and the application of different techniques on different devices, with the consequence that the reconstruction and presentation activities become complicated. This work presents a method, based on Semantic Web technologies, that helps digital investigators to correlate and present information acquired from forensic data, with the aim to get a more valuable reconstruction of events or actions in order to reach case conclusions.", "title": "" }, { "docid": "27a08efaa366023b45d5f187ac772ece", "text": "Erdheim-Chester disease (ECD) is a rare, multiorgan, non-Langerhans cell histiocytosis of uncertain origin, characterized by systemic xanthogranulomatous infiltration from CD68+CD1a- histiocytes. Skeletal involvement is present in up to 96% of cases with bilateral osteosclerosis of meta-diaphysis of long bones. Furthermore, in more than 50% of cases there is 1 extraskeletal manifestation. 
In this case report, we describe an interesting case of ECD with an extensive pan-cardiac and vascular involvement, in addition to skeletal, retro-orbital, and retroperitoneal involvement. A 44-year-old woman with a long history of exophthalmos was referred to our hospital for elective surgical orbital decompression. At preoperative examinations a large pericardial effusion was discovered. Echocardiography, computed tomography (CT), and magnetic resonance imaging (MRI) described an inhomogeneous mass involving the pericardium and the right heart, the abdominal aorta and its main branches, and the retroperitoneum, suggestive of a systemic inflammatory disorder. Histological examination on a biopsy sample confirmed the diagnosis of ECD. Radiology showed the pathognomonic long-bone involvement. Surgical orbital decompression was performed and medical therapy with interferon-α (INF-α) was started. Among extraskeletal manifestations of ECD, cardiovascular involvement is often asymptomatic and thus under-diagnosed but linked to poor prognosis. This is why clinicians should always look for it when a new case of ECD is diagnosed.", "title": "" } ]
scidocsrr
e316e77896201f3eeccccb8f91285875
Event Detection in Twitter using Aggressive Filtering and Hierarchical Tweet Clustering
[ { "docid": "bae6a214381859ac955f1651c7df0c0f", "text": "The fastcluster package is a C++ library for hierarchical, agglomerative clustering. It provides a fast implementation of the most efficient, current algorithms when the input is a dissimilarity index. Moreover, it features memory-saving routines for hierarchical clustering of vector data. It improves both asymptotic time complexity (in most cases) and practical performance (in all cases) compared to the existing implementations in standard software: several R packages, MATLAB, Mathematica, Python with SciPy. The fastcluster package presently has interfaces to R and Python. Part of the functionality is designed as a drop-in replacement for the methods hclust and flashClust in R and scipy.cluster.hierarchy.linkage in Python, so that existing programs can be effortlessly adapted for improved performance.", "title": "" }, { "docid": "38d7107de35f3907c0e42b111883613e", "text": "On-line social networks have become a massive communication and information channel for users world-wide. In particular, the microblogging platform Twitter, is characterized by short-text message exchanges at extremely high rates. In this type of scenario, the detection of emerging topics in text streams becomes an important research area, essential for identifying relevant new conversation topics, such as breaking news and trends. Although emerging topic detection in text is a well established research area, its application to large volumes of streaming text data is quite novel. Making scalability, efficiency and rapidness, the key aspects for any emerging topic detection algorithm in this type of environment.\n Our research addresses the aforementioned problem by focusing on detecting significant and unusual bursts in keyword arrival rates or bursty keywords. We propose a scalable and fast on-line method that uses normalized individual frequency signals per term and a windowing variation technique. This method reports keyword bursts which can be composed of single or multiple terms, ranked according to their importance. The average complexity of our method is O(n log n), where n is the number of messages in the time window. This complexity allows our approach to be scalable for large streaming datasets. If bursts are only detected and not ranked, the algorithm remains with lineal complexity O(n), making it the fastest in comparison to the current state-of-the-art. We validate our approach by comparing our performance to similar systems using the TREC Tweet 2011 Challenge tweets, obtaining 91% of matches with LDA, an off-line gold standard used in similar evaluations. In addition, we study Twitter messages related to the SuperBowl football events in 2011 and 2013.", "title": "" } ]
[ { "docid": "94919f204eb51066b7a647d8257d9f08", "text": "Self-delimiting (SLIM) programs are a central concept of theoretical computer science, particularly algorithmic information & probability theory, and asymptotically optimal program search (AOPS). To apply AOPS to (possibly recurrent) neural networks (NNs), I introduce SLIM NNs. A typical SLIM NN is a general parallel-sequential computer. Its neurons have threshold activation functions. Its output neu-rons may affect the environment, which may respond with new inputs. During a computational episode, activations are spreading from input neurons through the SLIM NN until the computation activates a special halt neuron. Weights of the NN's used connections define its program. Halting programs form a prefix code. An episode may never activate most neurons, and hence never even consider their outgoing connections. So we trace only neurons and connections used at least once. With such a trace, the reset of the initial NN state does not cost more than the latest program execution. This by itself may speed up traditional NN implementations. To efficiently change SLIM NN weights based on experience, any learning algorithm (LA) should ignore all unused weights. Since prefixes of SLIM programs influence their suffixes (weight changes occurring early in an episode influence which weights are considered later), SLIM NN LAs should execute weight changes online during activation spreading. This can be achieved by applying AOPS to growing SLIM NNs. Since SLIM NNs select their own task-dependent effective size (=number of used free parameters), they have a built-in way of addressing overfitting, with the potential of effectively becoming small and slim whenever this is beneficial. To efficiently teach a SLIM NN to solve many tasks, such as correctly classifying many different patterns, or solving many different robot control tasks, each connection keeps a list of tasks it is used for. The lists may be efficiently updated during training. To evaluate the overall effect of currently tested weight changes, a SLIM NN LA needs to re-test performance only on the efficiently computable union of tasks potentially affected by the current weight changes. Search spaces of many existing LAs (such as hill climbing and neuro-evolution) can be greatly reduced by obeying restrictions of SLIM NNs. Future SLIM NNs will be implemented on 3-dimensional brain-like multi-processor hardware. Their LAs will minimize task-specific total wire length of used connections, to encourage efficient solutions of subtasks by subsets of neurons that are physically close. The novel class of SLIM NN LAs is currently being probed in ongoing experiments …", "title": "" }, { "docid": "e8d0b295658e582e534b9f41b1f14b25", "text": "The rapid development of artificial intelligence has brought the artificial intelligence threat theory as well as the problem about how to evaluate the intelligence level of intelligent products. Both need to find a quantitative method to evaluate the intelligence level of intelligence systems, including human intelligence. Based on the standard intelligence system and the extended Von Neumann architecture, this paper proposes General IQ, Service IQ and Value IQ evaluation methods for intelligence systems, depending on different evaluation purposes. 
Among them, the General IQ of intelligence systems is to answer the question of whether \"the artificial intelligence can surpass the human intelligence\", which is reflected in putting the intelligence systems on an equal status and conducting the unified evaluation. The Service IQ and Value IQ of intelligence systems are used to answer the question of “how the intelligent products can better serve the human”, reflecting the intelligence and required cost of each intelligence system as a product in the process of serving human. 0. Background With AlphaGo defeating the human Go champion Li Shishi in 2016[1], the worldwide artificial intelligence is developing rapidly. As a result, the artificial intelligence threat theory is widely disseminated as well. At the same time, the intelligent products are flourishing and emerging. Can the artificial intelligence surpass the human intelligence? What level exactly does the intelligence of these intelligent products reach? To answer these questions requires a quantitative method to evaluate the development level of intelligence systems. Since the introduction of the Turing test in 1950, scientists have done a great deal of work on the evaluation system for the development of artificial intelligence[2]. In 1950, Turing proposed the famous Turing experiment, which can determine whether a computer has the intelligence equivalent to that of human with questioning and human judgment method. As the most widely used artificial intelligence test method, the Turing test does not test the intelligence development level of artificial intelligence, but only judges whether the intelligence system can be the same with human intelligence, and depends heavily on the judges’ and testees’ subjective judgments due to too much interference from human factors, so some people often claim their ideas have passed the Turing test, even without any strict verification. On March 24, 2015, the Proceedings of the National Academy of Sciences (PNAS) published a paper proposing a new Turing test method called “Visual Turing test”, which was designed to perform a more in-depth evaluation on the image cognitive ability of computer[3]. In 2014, Mark O. Riedl of the Georgia Institute of Technology believed that the essence of intelligence lied in creativity. He designed a test called Lovelace version 2.0. The test range of Lovelace 2.0 includes the creation of a virtual story novel, poetry, painting and music[4]. There are two problems in various solutions including the Turing test in solving the artificial intelligence quantitative test. Firstly, these test methods do not form a unified intelligent model, nor do they use the model as a basis for analysis to distinguish multiple categories of intelligence, which leads to that it is impossible to test different intelligence systems uniformly, including human; secondly, these test methods can not quantitatively analyze artificial intelligence, or only quantitatively analyze some aspects of intelligence. But what percentage does this system reach to human intelligence? How’s its ratio of speed to the rate of development of human intelligence? All these problems are not covered in the above study. In response to these problems, the author of this paper proposes that: There are three types of IQs in the evaluation of intelligence level for intelligence systems based on different purposes, namely: General IQ, Service IQ and Value IQ. 
The theoretical basis of the three methods and IQs for the evaluation of intelligence systems, detailed definitions and evaluation methods will be elaborated in the following. 1. Theoretical Basis: Standard Intelligence System and Extended Von Neumann Architecture People are facing two major challenges in evaluating the intelligence level of an intelligence system, including human beings and artificial intelligence systems. Firstly, artificial intelligence systems do not currently form a unified model; secondly, there is no unified model for the comparison between the artificial intelligence systems and the human at present. In response to this problem, the author's research team referred to the Von Neumann Architecture[5], David Wexler's human intelligence model[6], and DIKW model system in the field of knowledge management[7], and put forward a \"standard intelligent model\", which describes the characteristics and attributes of the artificial intelligence systems and the human uniformly, and takes an agent as a system with the abilities of knowledge acquisition, mastery, creation and feedback[8] (see Figure 1: Standard Intelligence Model). Based on this model in combination with Von Neumann architecture, an extended Von Neumann architecture can be formed (see Figure 2). Compared to the Von Neumann architecture, this model is added with innovation and creation function that can discover new elements of knowledge and new laws based on the existing knowledge, and make them stored in the storage for use by computers and controllers, and achieve knowledge interaction with the outside through the input / output system. The second addition is an external knowledge database or cloud storage that enables knowledge sharing, whereas the Von Neumann architecture's external storage only serves the single system. (Figure 2: Expanded Von Neumann Architecture. A. arithmetic logic unit; B. control unit; C. internal memory unit; D. innovation generator; E. input device; F. output device.) 2. Definitions of Three IQs of Intelligence System 2.1 Proposal of AI General IQ (AI G IQ) Based on the standard intelligent model, the research team established the AI IQ Test Scale and used it to conduct AI IQ tests on more than 50 artificial intelligence systems including Google, Siri, Baidu, Bing and human groups at the age of 6, 12, and 18 respectively in 2014 and 2016. From the test results, the performance of artificial intelligence systems such as Google and Baidu has been greatly increased from two years ago, but still lags behind the human group at the age of 6[9] (see Table 1 and Table 2). Table 1. Ranking of top 13 artificial intelligence IQs for 2014.", "title": "" }, { "docid": "72c054c955a34fbac8e798665ece8f57", "text": "In this paper, we propose and empirically validate a suite of hotspot patterns: recurring architecture problems that occur in most complex systems and incur high maintenance costs. In particular, we introduce two novel hotspot patterns, Unstable Interface and Implicit Cross-module Dependency. These patterns are defined based on Baldwin and Clark's design rule theory, and detected by the combination of history and architecture information. Through our tool-supported evaluations, we show that these patterns not only identify the most error-prone and change-prone files, they also pinpoint specific architecture problems that may be the root causes of bug-proneness and change-proneness. 
Significantly, we show that 1) these structure-history integrated patterns contribute more to error- and change-proneness than other hotspot patterns, and 2) the more hotspot patterns a file is involved in, the more error- and change-prone it is. Finally, we report on an industrial case study to demonstrate the practicality of these hotspot patterns. The architect and developers confirmed that our hotspot detector discovered the majority of the architecture problems causing maintenance pain, and they have started to improve the system's maintainability by refactoring and fixing the identified architecture issues.", "title": "" }, { "docid": "df9cb0c1ae20afdb14aa94c45170b439", "text": "•We present a case of multiple mucinous metaplasia and neoplasia of cervix, endometrium, fallopian tube, ovary, and mesenterium with external urethral meatus neoplasm.•Immunohistochemistry showed almost same pattern in each neoplasms.•PCR-direct sequencing showed no existence of both KRAS and GNAS mutations.•This report suggests a possibility of synchronous mucinous metaplasia and neoplasia \"beyond\" female genital tract.", "title": "" }, { "docid": "8482429f70e50b514960fca81db25ff7", "text": "Stem cells capable of differentiating to multiple lineages may be valuable for therapy. We report the isolation of human and rodent amniotic fluid–derived stem (AFS) cells that express embryonic and adult stem cell markers. Undifferentiated AFS cells expand extensively without feeders, double in 36 h and are not tumorigenic. Lines maintained for over 250 population doublings retained long telomeres and a normal karyotype. AFS cells are broadly multipotent. Clonal human lines verified by retroviral marking were induced to differentiate into cell types representing each embryonic germ layer, including cells of adipogenic, osteogenic, myogenic, endothelial, neuronal and hepatic lineages. Examples of differentiated cells derived from human AFS cells and displaying specialized functions include neuronal lineage cells secreting the neurotransmitter L-glutamate or expressing G-protein-gated inwardly rectifying potassium channels, hepatic lineage cells producing urea, and osteogenic lineage cells forming tissue-engineered bone.", "title": "" }, { "docid": "3798374ed33c3d3255dcc7d7c78507c2", "text": "Cloud computing is characterized by shared infrastructure and a decoupling between its operators and tenants. These two characteristics impose new challenges to databases applications hosted in the cloud, namely: (i) how to price database services, (ii) how to isolate database tenants, and (iii) how to optimize database performance on this shared infrastructure. We argue that today’s solutions, based on virtual-machines, do not properly address these challenges. We hint at new research directions to tackle these problems and argue that these three challenges share a common need for accurate predictive models of performance and resource utilization. We present initial predictive models for the important class of OLTP/Web workloads and show how they can be used to address these challenges.", "title": "" }, { "docid": "04bce58aad0500da7c14afd65028dfbb", "text": "Personalization in e-commerce has potentials to increase sales, customers' purchase intention and acquisition, as well as improvement of customer interaction. It is understood that personalization is a controllable variable for successful e-commerce. However, previous research on personalization proposed diverse concepts from numerous fields. 
As a result, it leads to bias construct of e-commerce personalization development and evaluation by academia and industry. To address this gap, a study was conducted to unravel personalization features from various perspectives. A Kitchenham's systematic literature review was used to discover personalization research from Q1/Q2 journals and top conference papers between 2012-2017. A theory-driven approach was administered to extract 21 selected papers. This process classifies personalization features into four dimensions based on three characters i.e objective, method, and user model. They include architectural, relational, instrumental and commercial dimensions. The results show that instrumental and commercial personalizations have been proved as the most popular dimension in the academic literature. However, relational personalization has been consistently rising as a new interesting topic to study since the massive growth of social media data.", "title": "" }, { "docid": "91a73d0e3e5d7a60b28357bc47868b87", "text": "Modeling, understanding, and predicting the spatio-temporal dynamics of online memes are important tasks, with ramifications on location-based services, social media search, targeted advertising and content delivery networks. However, the raw data revealing these dynamics are often incomplete and error-prone; for example, API limitations and data sampling policies can lead to an incomplete (and often biased) perspective on these dynamics. Hence, in this paper, we investigate new methods for uncovering the full (underlying) distribution through a novel spatio-temporal dynamics recovery framework which models the latent relationships among locations, memes, and times. By integrating these hidden relationships into a tensor-based recovery framework -- called AirCP -- we find that high-quality models of meme spread can be built with access to only a fraction of the full data. Experimental results on both synthetic and real-world Twitter hashtag data demonstrate the promising performance of the proposed framework: an average improvement of over 27% in recovering the spatio-temporal dynamics of hashtags versus five state-of-the-art alternatives.", "title": "" }, { "docid": "bdfb48fcd7ef03d913a41ca8392552b6", "text": "Recent advance of large scale similarity search involves using deeply learned representations to improve the search accuracy and use vector quantization methods to increase the search speed. However, how to learn deep representations that strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task. Existing methods simply leverage quantization loss and similarity loss, which result in unexpectedly biased back-propagating gradients and affect the search performances. To this end, we propose a novel gradient snapping layer (GSL) to directly regularize the back-propagating gradient towards a neighboring codeword, the generated gradients are un-biased for reducing similarity loss and also propel the learned representations to be accurately quantized. Joint deep representation and vector quantization learning can be easily performed by alternatively optimize the quantization codebook and the deep neural network. The proposed framework is compatible with various existing vector quantization approaches. 
Experimental results demonstrate that the proposed framework is effective, flexible and outperforms the state-of-the-art large scale similarity search methods.", "title": "" }, { "docid": "6318c9d0e62f1608c105b114c6395e6f", "text": "Myofascial pain associated with myofascial trigger points (MTrPs) is a common cause of nonarticular musculoskeletal pain. Although the presence of MTrPs can be determined by soft tissue palpation, little is known about the mechanisms and biochemical milieu associated with persistent muscle pain. A microanalytical system was developed to measure the in vivo biochemical milieu of muscle in near real time at the subnanogram level of concentration. The system includes a microdialysis needle capable of continuously collecting extremely small samples (approximately 0.5 microl) of physiological saline after exposure to the internal tissue milieu across a 105-microm-thick semi-permeable membrane. This membrane is positioned 200 microm from the tip of the needle and permits solutes of <75 kDa to diffuse across it. Three subjects were selected from each of three groups (total 9 subjects): normal (no neck pain, no MTrP); latent (no neck pain, MTrP present); active (neck pain, MTrP present). The microdialysis needle was inserted in a standardized location in the upper trapezius muscle. Due to the extremely small sample size collected by the microdialysis system, an established microanalytical laboratory, employing immunoaffinity capillary electrophoresis and capillary electrochromatography, performed analysis of selected analytes. Concentrations of protons, bradykinin, calcitonin gene-related peptide, substance P, tumor necrosis factor-alpha, interleukin-1beta, serotonin, and norepinephrine were found to be significantly higher in the active group than either of the other two groups (P < 0.01). pH was significantly lower in the active group than the other two groups (P < 0.03). In conclusion, the described microanalytical technique enables continuous sampling of extremely small quantities of substances directly from soft tissue, with minimal system perturbation and without harmful effects on subjects. The measured levels of analytes can be used to distinguish clinically distinct groups.", "title": "" }, { "docid": "de9767297368dffbdbae4073338bdb15", "text": "An increasing number of applications rely on 3D geoinformation. In addition to 3D geometry, these applications particularly require complex semantic information. In the context of spatial data infrastructures the needed data are drawn from distributed sources and often are thematically and spatially fragmented. Straight forward joining of 3D objects would inevitably lead to geometrical inconsistencies such as cracks, permeations, or other inconsistencies. Semantic information can help to reduce the ambiguities for geometric integration, if it is coherently structured with respect to geometry. The paper discusses these problems with special focus on virtual 3D city models and the semantic data model CityGML, an emerging standard for the representation and the exchange of 3D city models based on ISO 191xx standards and GML3. Different data qualities are analyzed with respect to their semantic and spatial structure leading to the distinction of six categories regarding the spatio-semantic coherence of 3D city models. Furthermore, it is shown how spatial data with complex object descriptions support the integration process. 
The derived categories will help in the future development of automatic integration methods for complex 3D geodata.", "title": "" }, { "docid": "d114be3bb594bb05709ecd0560c36817", "text": "The term \"papilledema\" describes optic disc swelling resulting from increased intracranial pressure. A complete history and direct funduscopic examination of the optic nerve head and adjacent vessels are necessary to differentiate papilledema from optic disc swelling due to other conditions. Signs of optic disc swelling include elevation and blurring of the disc and its margins, venous congestion, and retinal hard exudates, splinter hemorrhages and infarcts. Patients with papilledema usually present with signs or symptoms of elevated intracranial pressure, such as headache, nausea, vomiting, diplopia, ataxia or altered consciousness. Causes of papilledema include intracranial tumors, idiopathic intracranial hypertension (pseudotumor cerebri), subarachnoid hemorrhage, subdural hematoma and intracranial inflammation. Optic disc edema may also occur from many conditions other than papilledema, including central retinal artery or vein occlusion, congenital structural anomalies and optic neuritis.", "title": "" }, { "docid": "c8a9919a2df2cfd730816cd0171f08dd", "text": "In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classification (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual features from both global and local views. Existing image emotion classification works using hand-crafted features or deep features mainly focus on either low-level visual features or semantic-level image representations without taking all factors into consideration. Our proposed MldrNet unifies deep representations of three levels, i.e. image semantics, image aesthetics and low-level visual features through multiple instance learning (MIL) in order to effectively cope with noisy labeled data, such as images collected from the Internet. Extensive experiments on both Internet images and abstract paintings demonstrate the proposed method outperforms the state-of-the-art methods using deep features or hand-crafted features. The proposed approach also outperforms the state-of-the-art methods with at least 6% performance improvement in terms of overall classification accuracy.", "title": "" }, { "docid": "d3087ea8bea3516606b8fc5e61888658", "text": "This paper presents a novel topology for the generation of adjustable frequency and magnitude pulsewidth-modulated (PWM) three-phase ac from a balanced three-phase ac source with a high-frequency ac link. The proposed single-stage power electronic transformer (PET) with bidirectional power flow capability may find application in compact isolated PWM ac drives. This topology along with the proposed control has the following advantages: 1) input power factor correction; 2) common-mode voltage suppression at the load end; 3) high-quality output voltage waveform (comparable with conventional space vector PWM); and 4) minimization of output voltage loss, common-mode voltage switching, and distortion of the load current waveform due to leakage inductance commutation. A source-based commutation of currents associated with energy in leakage inductance (termed as leakage energy) has been proposed. This results in soft-switching of the output-side converter and recovery of the leakage energy. The entire topology along with the proposed control scheme has been analyzed. 
The simulation and experimental results verify the analysis and advantages of the proposed PET.", "title": "" }, { "docid": "5c97711d149d6744e3ea6d070016cd39", "text": "This paper presents a clock generator for a MIPI M-PHY serial link transmitter, which includes an ADPLL, a digitally controlled oscillator (DCO), a programmable multiplier, and the actual serial driver. The paper focuses on the design of a DCO and how to enhance the frequency resolution to diminish the quantization noise introduced by the frequency discretization. As a result, a 17-kHz DCO frequency tuning resolution is demonstrated. Furthermore, implementation details of a low-power programmable 1-to-2-or-4 frequency multiplier are elaborated. The design has been implemented in a 40-nm CMOS process. The measurement results verify that the circuit provides the MIPI clock data rates from 1.248 GHz to 5.83 GHz. The DCO and multiplier unit dissipates a maximum of 3.9 mW from a 1.1 V supply and covers a small die area of 0.012 mm².", "title": "" }, { "docid": "4457c0b480ec9f3d503aa89c6bbf03b9", "text": "An output-capacitorless low-dropout regulator (LDO) with a direct voltage-spike detection circuit is presented in this paper. The proposed voltage-spike detection is based on capacitive coupling. The detection circuit makes use of the rapid transient voltage at the LDO output to increase the bias current momentarily. Hence, the transient response of the LDO is significantly enhanced due to the improvement of the slew rate at the gate of the power transistor. The proposed voltage-spike detection circuit is applied to an output-capacitorless LDO implemented in a standard 0.35-µm CMOS technology (where VTHN ≈ 0.5 V and VTHP ≈ -0.65 V). Experimental results show that the LDO consumes 19 µA only. It regulates the output at 0.8 V from a 1-V supply, with dropout voltage of 200 mV at the maximum output current of 66.7 mA. The voltage spike and the recovery time of the LDO with the proposed voltage-spike detection circuit are reduced to about 70 mV and 3 µs, respectively, whereas they are more than 420 mV and 30 µs for the LDO without the proposed detection circuit.", "title": "" }, { "docid": "3b820fff1efefd0cae4239bee76e142c", "text": "Characters form the focus of various studies of literary works, including social network analysis, archetype induction, and plot comparison. The recent rise in the computational modelling of literary works has produced a proportional rise in the demand for character-annotated literary corpora. However, automatically identifying characters is an open problem and there is low availability of literary texts with manually labelled characters. To address the latter problem, this work presents three contributions: (1) a comprehensive scheme for manually resolving mentions to characters in texts. (2) A novel collaborative annotation tool, CHARLES (CHAracter Resolution Label-Entry System) for character annotation and similar cross-document tagging tasks. (3) The character annotations resulting from a pilot study on the novel Pride and Prejudice, demonstrating the scheme and tool facilitate the efficient production of high-quality annotations. We expect this work to motivate the further production of annotated literary corpora to help meet the demand of the community.", "title": "" } ]
scidocsrr
a7d55b307f91869b60d9dbf6680e5f45
Speech Recognition using Hidden Markov Model
[ { "docid": "a52d2a2c8fdff0bef64edc1a97b89c63", "text": "This paper provides a review of recent developments in speech recognition research. The concept of sources of knowledge is introduced and the use of knowledge to generate and verify hypotheses is discussed. The difficulties that arise in the construction of different types of speech recognition systems are discussed and the structure and performance of several such systems is presented. Aspects of component subsystems at the acoustic, phonetic, syntactic, and semantic levels are presented. System organizations that are required for effective interaction and use of various component subsystems in the presence of error and ambiguity are discussed.", "title": "" } ]
[ { "docid": "2ee6e88d8a18cedc1745e1512ed3a837", "text": "The cardiorespiratory signal is a fundamental vital sign to assess a person's health. Additionally, the cardio-respiratory signal gives a great deal of information to healthcare providers wishing to monitor healthy individuals. This paper proposes a method to detect the respiratory waveform from an accelerometer strapped onto the chest. A system was designed and several experiments were conducted on volunteers. The acquisition is performed in different status: normal, apnea, deep breathing and also in different postures: vertical (sitting, standing) or horizontal (lying down). This method could therefore be suitable for automatic identification of some respiratory malfunction, for example during the obstructive apnea.", "title": "" }, { "docid": "5a5c71b56cf4aa6edff8ecc57298a337", "text": "The learning process of a multilayer perceptron requires the optimization of an error function E(y,t) comparing the predicted output, y, and the observed target, t. We review some usual error functions, analyze their mathematical properties for data classification purposes, and introduce a new one, E(Exp), inspired by the Z-EDM algorithm that we have recently proposed. An important property of E(Exp) is its ability to emulate the behavior of other error functions by the sole adjustment of a real-valued parameter. In other words, E(Exp) is a sort of generalized error function embodying complementary features of other functions. The experimental results show that the flexibility of the new, generalized, error function allows one to obtain the best results achievable with the other functions with a performance improvement in some cases.", "title": "" }, { "docid": "077cbfac4c207e0763e2f1ae3f0073cf", "text": "Visual Question Answering (VQA) is a wellknown and challenging task that requires systems to jointly reason about natural language and vision. Deep learning models in various forms have been the standard for solving VQA. However, some of these VQA models are better at certain types of image-question pairs than other models. Ensembling VQA models intelligently to leverage their diverse expertise is, therefore, advantageous. Stacking With Auxiliary Features (SWAF) is an intelligent ensembling technique which learns to combine the results of multiple models using features of the current problem as context. We propose four categories of auxiliary features for ensembling for VQA. Three out of the four categories of features can be inferred from an image-question pair and do not require querying the component models. The fourth category of auxiliary features uses model-specific explanations. In this paper, we describe how we use these various categories of auxiliary features to improve performance for VQA. Using SWAF to effectively ensemble three recent systems, we obtain a new state-of-the-art. Our work also highlights the advantages of explainable AI models.", "title": "" }, { "docid": "c41c56eeb56975c4d65e3847aa6b8b01", "text": "We address the problem of comparing sets of images for object recognition, where the sets may represent variations in an object's appearance due to changing camera pose and lighting conditions. canonical correlations (also known as principal or canonical angles), which can be thought of as the angles between two d-dimensional subspaces, have recently attracted attention for image set matching. 
Canonical correlations offer many benefits in accuracy, efficiency, and robustness compared to the two main classical methods: parametric distribution-based and nonparametric sample-based matching of sets. Here, this is first demonstrated experimentally for reasonably sized data sets using existing methods exploiting canonical correlations. Motivated by their proven effectiveness, a novel discriminative learning method over sets is proposed for set classification. Specifically, inspired by classical linear discriminant analysis (LDA), we develop a linear discriminant function that maximizes the canonical correlations of within-class sets and minimizes the canonical correlations of between-class sets. Image sets transformed by the discriminant function are then compared by the canonical correlations. Classical orthogonal subspace method (OSM) is also investigated for the similar purpose and compared with the proposed method. The proposed method is evaluated on various object recognition problems using face image sets with arbitrary motion captured under different illuminations and image sets of 500 general objects taken at different views. The method is also applied to object category recognition using ETH-80 database. The proposed method is shown to outperform the state-of-the-art methods in terms of accuracy and efficiency", "title": "" }, { "docid": "795bdbc3dea0ade425c5af251e09a607", "text": "Entity disambiguation with Wikipedia relies on structured information from redirect pages, article text, inter-article links, and categories. We explore whether web links can replace a curated encyclopaedia, obtaining entity prior, name, context, and coherence models from a corpus of web pages with links to Wikipedia. Experiments compare web link models to Wikipedia models on well-known conll and tac data sets. Results show that using 34 million web links approaches Wikipedia performance. Combining web link and Wikipedia models produces the best-known disambiguation accuracy of 88.7 on standard newswire test data.", "title": "" }, { "docid": "5e07384e70a5f2a3cc4d0129542da8a9", "text": "A low profile differential-fed dual-polarized microstrip patch antenna (MPA) with bandwidth enhancement is proposed under radiation of the first and second odd-order resonant modes. First, all of even-order modes are fully suppressed by using a differentially feeding scheme instead of the single probe feed. Next, the radiation pattern of a square MPA is theoretically analyzed. It is demonstrated that the traditional monopole-like radiation of the second odd-order mode in the H-plane, i.e., TM21 mode, can be transformed into the broadside radiation by etching out a narrow slot at the center of the radiating patch. After that, an array of shorting pins is symmetrically embedded underneath the radiating patch so as to progressively push up the resonant frequency of the TM01 mode (or TM10 mode), while almost maintaining that of TM21 mode (or TM12 mode) to be unchanged. With these arrangements, a wide impedance bandwidth with stable radiation peak in the broadside direction is achieved for the MPA under this dual modes operation. Finally, the dual-polarized MPA is fabricated and measured. 
The measured results are found in good agreement with the simulated ones in terms of the reflection coefficient, radiation pattern, and realized gain, demonstrating that the MPA’s impedance bandwidth ( $\\vert S_{\\mathrm { {dd11}}}\\vert <-10$ dB) is tremendously increased up to about 8% with a high differential port-to-port isolation of better than 22.6 dB. In particular, a low profile property of about 0.024 free-space wavelength and the stable radiation pattern are also achieved.", "title": "" }, { "docid": "e26c8fde7d79298ea0dba161bf24f2da", "text": "We present a new exact subdivision algorithm CEVAL for isolating the complex roots of a square-free polynomial in any given box. It is a generalization of a previous real root isolation algorithm called EVAL. Under suitable conditions, our approach is applicable for general analytic functions. CEVAL is based on the simple Bolzano Principle and is easy to implement exactly. Preliminary experiments have shown its competitiveness.\n We further show that, for the \"benchmark problem\" of isolating all roots of a square-free polynomial with integer coefficients, the asymptotic complexity of both algorithms EVAL and CEVAL matches (up a logarithmic term) that of more sophisticated real root isolation methods which are based on Descartes' Rule of Signs, Continued Fraction or Sturm sequence. In particular, we show that the tree size of EVAL matches that of other algorithms. Our analysis is based on a novel technique called Δ-clusters from which we expect to see further applications.", "title": "" }, { "docid": "7fbb593d2a1ad935cab676503849044b", "text": "The aim of this paper is to give an overview on 50 years of research in electromyography in the four competitive swimming strokes (crawl, breaststroke, butterfly, and backstroke). A systematic search of the existing literature was conducted using the combined keywords \"swimming\" and \"EMG\" on studies published before August 2013, in the electronic databases PubMed, ISI Web of Knowledge, SPORT discus, Academic Search Elite, Embase, CINAHL and Cochrane Library. The quality of each publication was assessed by two independent reviewers using a custom made checklist. Frequency of topics, muscles studied, swimming activities, populations, types of equipment and data treatment were determined from all selected papers and, when possible, results were compared and contrasted. In the first 20 years of EMG studies in swimming, most papers were published as congress proceedings. The methodological quality was low. Crawl stroke was most often studied. There was no standardized manner of defining swimming phases, normalizing the data or of presenting the results. Furthermore, the variability around the mean muscle activation patterns is large which makes it difficult to define a single pattern applicable to all swimmers in any activity examined.", "title": "" }, { "docid": "0080aa23209d70192bb13b9451082803", "text": "This paper studies the problem of secret-message transmission over a wiretap channel with correlated sources in the presence of an eavesdropper who has no source observation. A coding scheme is proposed based on a careful combination of 1) Wyner-Ziv's source coding to generate secret key from correlated sources based on a certain cost on the channel, 2) one-time pad to secure messages without additional cost, and 3) Wyner's secrecy coding to achieve secrecy based on the advantage of legitimate receiver's channel over the eavesdropper's. 
The work sheds light on optimal strategies for practical code design for secure communication/storage systems.", "title": "" }, { "docid": "ed13193df5db458d0673ccee69700bc0", "text": "Interest in meat fatty acid composition stems mainly from the need to find ways to produce healthier meat, i.e. with a higher ratio of polyunsaturated (PUFA) to saturated fatty acids and a more favourable balance between n-6 and n-3 PUFA. In pigs, the drive has been to increase n-3 PUFA in meat and this can be achieved by feeding sources such as linseed in the diet. Only when concentrations of α-linolenic acid (18:3) approach 3% of neutral lipids or phospholipids are there any adverse effects on meat quality, defined in terms of shelf life (lipid and myoglobin oxidation) and flavour. Ruminant meats are a relatively good source of n-3 PUFA due to the presence of 18:3 in grass. Further increases can be achieved with animals fed grain-based diets by including whole linseed or linseed oil, especially if this is \"protected\" from rumen biohydrogenation. Long-chain (C20-C22) n-3 PUFA are synthesised from 18:3 in the animal although docosahexaenoic acid (DHA, 22:6) is not increased when diets are supplemented with 18:3. DHA can be increased by feeding sources such as fish oil although too-high levels cause adverse flavour and colour changes. Grass-fed beef and lamb have naturally high levels of 18:3 and long chain n-3 PUFA. These impact on flavour to produce a 'grass fed' taste in which other components of grass are also involved. Grazing also provides antioxidants including vitamin E which maintain PUFA levels in meat and prevent quality deterioration during processing and display. In pork, beef and lamb the melting point of lipid and the firmness/hardness of carcass fat is closely related to the concentration of stearic acid (18:0).", "title": "" }, { "docid": "2ad6b17fcb0ea20283e318a3fed2939f", "text": "A fundamental problem of time series is k nearest neighbor (k-NN) query processing. However, existing methods are not fast enough for large dataset. In this paper, we propose a novel approach, STS3, to process k-NN queries by transforming time series to sets and measure the similarity under Jaccard metric. Our approach is more accurate than Dynamic Time Warping(DTW) in our suitable scenarios and it is faster than most of the existing methods, due to the efficient similarity search for sets. Besides, we also developed an index, a pruning and an approximation technique to improve the k-NN query procedure. As shown in the experimental results, all of them could accelerate the query processing effectively.", "title": "" }, { "docid": "5d721d52aa72607b2638c01381369a8d", "text": "In this work, we present, LieNet, a novel deep learning framework that simultaneously detects, segments multiple object instances, and estimates their 6D poses from a single RGB image without requiring additional post-processing. Our system is accurate and fast (∼10 fps), which is well suited for real-time applications. In particular, LieNet detects and segments object instances in the image analogous to modern instance segmentation networks such as Mask R-CNN, but contains a novel additional sub-network for 6D pose estimation. LieNet estimates the rotation matrix of an object by regressing a Lie algebra based rotation representation, and estimates the translation vector by predicting the distance of the object to the camera center. 
The experiments on two standard pose benchmarking datasets show that LieNet greatly outperforms other recent CNN based pose prediction methods when they are used with monocular images and without post-refinements.", "title": "" }, { "docid": "8f3f3d30d5f949da2a56a29881939924", "text": "The paper reports work to create believable autonomous Non Player Characters in Video games in general and educational role play games in particular. It aims to increase their ability to respond appropriately to the player’s actions both cognitively and emotionally by integrating two models: the cognitive appraisal-based FAtiMA architecture, and the drives-based PSI model. We discuss the modelling of adaptive affective autonomous characters based on a biologically-inspired theory of human action regulation taking into account perception, motivation, emotions, memory, learning and planning. These agents populate an educational Role Playing Game, ORIENT (Overcoming Refugee Integration with Empathic Novel Technology) dealing with the cultural-awareness problem for children aged 13–14.", "title": "" }, { "docid": "c47783fb0004de1ade74354ccd7498a0", "text": "We consider a fundamentally new approach to role and policy mining: finding RBAC models which reflect the observed usage of entitlements and the attributes of users. Such policies are interpretable, i.e., there is a natural explanation of why a role is assigned to a user and are conservative from a security standpoint since they are based on actual usage. Further, such \"generative\" models provide many other benefits including reconciliation with policies based on entitlements, detection of provisioning errors, as well as the detection of anomalous behavior. Our contributions include defining the fundamental problem as extensions of the well-known role mining problem, as well as providing several new algorithms based on generative machine learning models. Our algorithms find models which are causally associated with actual usage of entitlements and any arbitrary combination of user attributes when such information is available. This is the most natural process to provision roles, thus addressing a key usability issue with existing role mining algorithms.\n We have evaluated our approach on a large number of real life data sets, and our algorithms produce good role decompositions as measured by metrics such as coverage, stability, and generality We compare our algorithms with traditional role mining algorithms by equating usage with entitlement. Results show that our algorithms improve on existing approaches including exact mining, approximate mining, and probabilistic algorithms; the results are more temporally stable than exact mining approaches, and are faster than probabilistic algorithms while removing artificial constraints such as the number of roles assigned to each user. Most importantly, we believe that these roles more accurately capture what users actually do, the essence of a role, which is not captured by traditional methods.", "title": "" }, { "docid": "44cc7de51b68b1dcc769f6f020168ca5", "text": ". Pairs of similar examples classified dissimilarly can be cancelled out by pairs classified dissimilarly in the opposite direction, least stringent fairness requirement I Hybrid Fairness: cancellation only among cross-pairs within “buckets” – interpolates between individual and group fairness I Fairness loss minimized by constant predictors, but this incurs bad accuracy loss . 
How to trade off accuracy and fairness losses?", "title": "" }, { "docid": "42167e7708bb73b08972e15a44a6df02", "text": "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.", "title": "" }, { "docid": "93a57b4e886aefce412e41df846c4bc5", "text": "This paper proposes a novel approach to understand customer relationships among businesses and the type of information that can be inferred from these relationships. Our approach is grounded in a unique method of constructing a mutual customer business graph, where businesses are represented by nodes and the weight of the edge connecting two businesses reflects the strength of their mutual customer population, which is estimated based on the reviews from the Yelp academic data set. We construct and analyze these mutual customer business graphs for cities of Las Vegas and Phoenix using centrality and spectral analysis techniques. Centrality analysis computes unweighted and weighted versions of degree and PageRank graph measures; the results reveal that businesses with high graph centralities also tend to be geographically central relative to other businesses. Spectral clustering partitions the graph to group businesses that are frequented by the same set of customers. An analysis of the frequency distribution of words from the reviews within each cluster suggests that businesses aggregate around a theme. Taken together, these findings suggest that customers prefer to visit businesses that are geographically proximate and/or offer similar products and services. We discuss how businesses could strategically position themselves by considering the impact of these two factors in attracting clientele.", "title": "" }, { "docid": "ece408df916581aa838f7991945d3586", "text": "It is well-documented that most students do not have adequate proficiencies in inquiry and metacognition, particularly at deeper levels of comprehension that require explanatory reasoning. The proficiencies are not routinely provided by teachers and normal tutors so it is worthwhile to turn to computer-based learning environments. This article describes some of our recent computer systems that were designed to facilitate explanation-centered learning through strategies of inquiry and metacognition while students learn science and technology content. Point&Query augments hypertext, hypermedia, and other learning environments with question–answer facilities that are under the learner control. AutoTutor and iSTART use animated conversational agents to scaffold strategies of inquiry, metacognition, and explanation construction. 
AutoTutor coaches students in generating answers to questions that require explanations (e.g., why, what-if, how) by holding a mixed-initiative dialogue in natural language. iSTART models and coaches students in constructing self-explanations and in applying other metacomprehension strategies while reading text. These systems have shown promising results in tests of learning gains and learning strategies.", "title": "" }, { "docid": "91dab8f670d48eb3f55fcce60a932711", "text": "Monitoring the future health status of patients from the historical Electronic Health Record (EHR) is a core research topic in predictive healthcare. The most important challenges are to model the temporality of sequential EHR data and to interpret the prediction results. In order to reduce the future risk of diseases, we propose a multi-task framework that can monitor the multiple status of diagnoses. Patients’ historical records are directly fed into a Recurrent Neural Network (RNN) which memorizes all the past visit information, and then a task-specific layer is trained to predict multiple diagnoses. Moreover, three attention mechanisms for RNNs are introduced to measure the relationships between past visits and current status. Experimental results show that the proposed attention-based RNNs can significantly improve the prediction accuracy compared to widely used approaches. With the attention mechanisms, the proposed framework is able to identify the visit information which is important to the final prediction.", "title": "" }, { "docid": "904f74117506c0c94e93c3f426537918", "text": "Many automation and monitoring systems in agriculture do not have a calculation system for watering based on weather. Of these issues, will be discussed weather prediction system using fuzzy logic algorithm for supporting General Farming Automation. The weather calculation system works by taking a weather prediction data from the Weather Service Provider (WSP). Furthermore, it also retrieves soil moisture sensor value and rainfall sensor value. After that, the system will calculate using fuzzy logic algorithm whether the plant should be watered or not. The weather calculation system will help the performance of the General Farming Automation Control System in order to work automatically. So, the plants still obtain water and nutrients intake are not excessive.", "title": "" } ]
scidocsrr
dc15ab382282cd7e65b8e32f2850818e
Multi-Cast Attention Networks
[ { "docid": "9387c02974103731846062b549022819", "text": "Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features.", "title": "" }, { "docid": "1718c817d15b9bc1ab99d359ff8d1157", "text": "Semantic matching, which aims to determine the matching degree between two texts, is a fundamental problem for many NLP applications. Recently, deep learning approach has been applied to this problem and significant improvements have been achieved. In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: i.e. the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word level interaction at the current position. Based on this idea, we propose a novel deep architecture, namely Match-SRNN, to model the recursive matching structure. Firstly, a tensor is constructed to capture the word level interactions. Then a spatial RNN is applied to integrate the local interactions recursively, with importance determined by four types of gates. Finally, the matching score is calculated based on the global interaction. We show that, after degenerated to the exact matching scenario, Match-SRNN can approximate the dynamic programming process of longest common subsequence. Thus, there exists a clear interpretation for Match-SRNN. Our experiments on two semantic matching tasks showed the effectiveness of Match-SRNN, and its ability of visualizing the learned matching structure.", "title": "" }, { "docid": "0201a5f0da2430ec392284938d4c8833", "text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. 
Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.", "title": "" }, { "docid": "f8854602bbb2f5295a5fba82f22ca627", "text": "Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favourable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or ‘duet’ performs significantly better than either neural network individually on a Web page ranking task, and significantly outperforms traditional baselines and other recently proposed models based on neural networks.", "title": "" } ]
[ { "docid": "aa23a546d17572f6b79c72832d83308b", "text": "Leader opening and closing behaviors are assumed to foster high levels of employee exploration and exploitation behaviors, hence motivating employee innovative performance. Applying the ambidexterity theory of leadership for innovation, results revealed that leader opening and closing behaviors positively predicted employee exploration and exploitation behaviors, respectively, above and beyond the control variables. Moreover, results showed that employee innovative performance was significantly predicted by leader opening behavior, leader closing behavior, and the interaction between leaders’ opening and closing behaviors, above and beyond control variables.", "title": "" }, { "docid": "d763cefd5d584405e1a6c8e32c371c0c", "text": "Abstract: Whole world and administrators of Educational institutions’ in our country are concerned about regularity of student attendance. Student’s overall academic performance is affected by the student’s present in his institute. Mainly there are two conventional methods for attendance taking and they are by calling student nams or by taking student sign on paper. They both were more time consuming and inefficient. Hence, there is a requirement of computer-based student attendance management system which will assist the faculty for maintaining attendance of presence. The paper reviews various computerized attendance management system. In this paper basic problem of student attendance management is defined which is traditionally taken manually by faculty. One alternative to make student attendance system automatic is provided by Computer Vision. In this paper we review the various computerized system which is being developed by using different techniques. Based on this review a new approach for student attendance recording and management is proposed to be used for various colleges or academic institutes.", "title": "" }, { "docid": "ef57140e433ad175a3fae38236effa69", "text": "For a real driver assistance system, the weather, driving speed, and background could affect the accuracy of obstacle detection. In the past, only a few studies covered all the different weather conditions and almost none of them had paid attention to the safety at vehicle lateral blind spot area. So, this paper proposes a hybrid scheme for pedestrian and vehicle detection, and develop a warning system dedicated for lateral blind spot area under different weather conditions and driving speeds. More specifically, the HOG and SVM methods are used for pedestrian detection. The image subtraction, edge detection and tire detection are applied for vehicle detection. Experimental results also show that the proposed system can efficiently detect pedestrian and vehicle under several scenarios.", "title": "" }, { "docid": "7077a80ec214dd78ebc7aeedd621d014", "text": "Malicious URL, a.k.a. malicious website, is a common and serious threat to cybersecurity. Malicious URLs host unsolicited content (spam, phishing, drive-by exploits, etc.) and lure unsuspecting users to become victims of scams (monetary loss, theft of private information, and malware installation), and cause losses of billions of dollars every year. It is imperative to detect and act on such threats in a timely manner. Traditionally, this detection is done mostly through the usage of blacklists. However, blacklists cannot be exhaustive, and lack the ability to detect newly generated malicious URLs. 
To improve the generality of malicious URL detectors, machine learning techniques have been explored with increasing attention in recent years. This article aims to provide a comprehensive survey and a structural understanding of Malicious URL Detection techniques using machine learning. We present the formal formulation of Malicious URL Detection as a machine learning task, and categorize and review the contributions of literature studies that addresses different dimensions of this problem (feature representation, algorithm design, etc.). Further, this article provides a timely and comprehensive survey for a range of different audiences, not only for machine learning researchers and engineers in academia, but also for professionals and practitioners in cybersecurity industry, to help them understand the state of the art and facilitate their own research and practical applications. We also discuss practical issues in system design, open research challenges, and point out some important directions for future research.", "title": "" }, { "docid": "37484cdfa29c7021c07f307c695c0a77", "text": "Deep neural networks have shown promising results for various clinical prediction tasks such as diagnosis, mortality prediction, predicting duration of stay in hospital, etc. However, training deep networks – such as those based on Recurrent Neural Networks (RNNs) – requires large labeled data, high computational resources, and significant hyperparameter tuning effort. In this work, we investigate as to what extent can transfer learning address these issues when using deep RNNs to model multivariate clinical time series. We consider transferring the knowledge captured in an RNN trained on several source tasks simultaneously using a large labeled dataset to build the model for a target task with limited labeled data. An RNN pre-trained on several tasks provides generic features, which are then used to build simpler linear models for new target tasks without training task-specific RNNs. For evaluation, we train a deep RNN to identify several patient phenotypes on time series from MIMIC-III database, and then use the features extracted using that RNN to build classifiers for identifying previously unseen phenotypes, and also for a seemingly unrelated task of in-hospital mortality. We demonstrate that (i) models trained on features extracted using pre-trained RNN outperform or, in the worst case, perform as well as task-specific RNNs; (ii) the models using features from pre-trained models are more robust to the size of labeled data than task-specific RNNs; and (iii) features extracted using pre-trained RNN are generic enough and perform better than typical statistical hand-crafted features.", "title": "" }, { "docid": "53fd0a42156a08d913718c8cfff748ef", "text": "Nowadays scientists receive increasingly large volumes of data daily. These volumes and accompanying metadata that describes them are collected in scientific file repositories. Today's scientists need a data management tool that makes these file repositories accessible and performs a number of exploration steps near-instantly. Current database technology, however, has a long data-to-insight time, and does not provide enough interactivity to shorten the exploration time. We envision that exploiting metadata helps solving these problems. To this end, we propose a novel query execution paradigm, in which we decompose the query execution into two stages. 
During the first stage, we process only metadata, whereas the rest of the data is processed during the second stage. So that, we can exploit metadata to boost interactivity and to ingest only required data per query transparently. Preliminary experiments show that up-front ingestion time is reduced by orders of magnitude, while query performance remains similar. Motivated by these results, we identify the challenges on the way from the new paradigm to efficient interactive data exploration.", "title": "" }, { "docid": "fa7b90dfb8d10bf9942561f13b4d8084", "text": "We present a wearable textile sensor system for monitoring muscle activity, leveraging surface pressure changes between the skin and an elastic sport support band. The sensor is based on an 8×16 element fabric resistive pressure sensing matrix of 1cm spatial resolution, which can be read out with 50fps refresh rate. We evaluate the system by monitoring leg muscles during leg workouts in a gym out of the lab. The sensor covers the lower part of quadriceps of the user. The shape and movement of the two major muscles (vastus lateralis and medialis) are visible from the data during various exercises. The system registers the activity of the user for every second, including which machine he/she is using, walking, relaxing and adjusting the machines; it also counts the repetitions from each set and evaluate the force consistency which is related to the workout quality. 6 people participated in the experiment of overall 24 leg workout sessions. Each session includes cross-trainer warm-up and cool-down, 3 different leg machines, 4 sets on each machine. Plus relaxing, adjusting machines, and walking, we perform activity recognition and quality evaluation through 2-dimensional mapping and the time sequence of the average force. We have reached 81.7% average recognition accuracy on a 2s sliding window basis, 93.3% on an event basis, and 85.6% spotting F1-score. We further demonstrate how to evaluate the workout quality through counting, force pattern variation and consistency.", "title": "" }, { "docid": "d903abd37f9ef4c6431051975b4d561b", "text": "The recent development of uncertainty theories that account for the notion of belief is linked to the emergence, in the XXth century, of Decision Theory and Artificial Intelligence. Nevertheless, this topic was dealt with very differently by each area. Decision Theory insisted on the necessity to found representations on the empirical observation of individuals choosing between courses of action, regardless of any other type of information. Any axiom in the theory should be liable of empirical validation. Probabilistic representations of uncertainty can then be justified with a subjectivist point of view, without necessary reference to frequency. Degrees of probability then evaluate to what extent an agent believes in the occurrence of an event or in the truth of a proposition. In contrast, Artificial Intelligence adopted a more introspective approach aiming at formalizing intuitions, reasoning processes, through the statement of reasonable axioms, often without reference to probability. Actually, until the nineties Artificial Intelligence essentially focused on purely qualitative and ordinal (in fact, logical) representations.", "title": "" }, { "docid": "d7c27413eb3f379618d1aafd85a43d3f", "text": "This paper presents a tool Altair that automatically generates API function cross-references, which emphasizes reliable structural measures and does not depend on specific client code. 
Altair ranks related API functions for a given query according to pair-wise overlap, i.e., how they share state, and clusters tightly related ones into meaningful modules.\n Experiments against several popular C software packages show that Altair recommends related API functions for a given query with remarkably more precise and complete results than previous tools, that it can extract modules from moderate-sized software (e.g., Apache with 1000+ functions) at high precision and recall rates (e.g., both exceeding 70% for two modules in Apache), and that the computation can finish within a few seconds.", "title": "" }, { "docid": "51f400ce30094e1b2fe1c235ea2af55d", "text": "Machine learning is a quickly evolving field which now looks really different from what it was 15 years ago, when classification and clustering were major issues. This document proposes several trends to explore the new questions of modern machine learning, with the strong afterthought that the belief function framework has a major role to play.", "title": "" }, { "docid": "d5870092a3e8401654b5b9948c77cb0a", "text": "Recent research shows that there has been increased interest in investigating the role of mood and emotions in the HCI domain. Our moods, however, are complex. They are affected by many dynamic factors and can change multiple times throughout each day. Furthermore, our mood can have significant implications in terms of our experiences, our actions and most importantly on our interactions with other people. We have developed MobiMood, a proof-of-concept social mobile application that enables groups of friends to share their moods with each other. In this paper, we present the results of an exploratory field study of MobiMood, focusing on explicit mood sharing in-situ. Our results highlight that certain contextual factors had an effect on mood and the interpretation of moods. Furthermore, mood sharing and mood awareness appear to be good springboards for conversations and increased communication among users. These and other findings lead to a number of key implications in the design of mobile social awareness applications.", "title": "" }, { "docid": "8a77ab964896d3fea327e76b2efad8ef", "text": "We present the fundamental ideas underlying statistical hypothesis testing using the frequentist framework. We start with a simple example that builds up the one-sample t-test from the beginning, explaining important concepts such as the sampling distribution of the sample mean, and the iid assumption. Then we examine the meaning of the p-value in detail, and discuss several important misconceptions about what a p-value does and does not tell us. This leads to a discussion of Type I, II error and power, and Type S and M error. An important conclusion from this discussion is that one should aim to carry out appropriately powered studies. Next, we discuss two common issues we have encountered in psycholinguistics and linguistics: running experiments until significance is reached, and the “garden-of-forking-paths” problem discussed by Gelman and others. The best way to use frequentist methods is to run appropriately powered studies, check model assumptions, clearly separate exploratory data analysis from planned comparisons decided upon before the study was run, and always attempt to replicate results.", "title": "" }, { "docid": "db9f63a30b04a1815e156eba2e8ee3bb", "text": "Data holders can produce synthetic versions of datasets when concerns about potential disclosure restrict the availability of the original records. 
This paper is concerned with methods to judge whether such synthetic data have a distribution that is comparable to that of the original data, what we will term general utility. We consider how general utility compares with specific utility, the similarity of results of analyses from the synthetic data and the original data. We adapt a previous general measure of data utility, the propensity score mean-squared-error (pMSE), to the specific case of synthetic data and derive its distribution for the case when the correct synthesis model is used to create the synthetic data. Our asymptotic results are confirmed by a simulation study. We also consider two specific utility measures, confidence interval overlap and standardized difference in summary statistics, which we compare with the general utility results. We present two examples examining this comparison of general and specific utility to real data syntheses and make recommendations for their use for evaluating synthetic data.", "title": "" }, { "docid": "eb8bdb2a401f2a1233118e53430ac6c1", "text": "The two main research branches in intelligent vehicles field are Advanced Driver Assistance Systems (ADAS) [1] and autonomous driving [2]. ADAS generally work on predefined enviroment and limited scenarios such as highway driving, low speed driving, night driving etc. In such situations this systems have sufficiently high performance and the main features that allow their large diffusion and that have enabled commercialization in this years are the low cost, the small size and the easy integration into the vehicle. Autonomous vehicle, on the other hand, should be ready to work over all-scenarios, all-terrain and all-wheather conditions, but nowadays autonomous vehicle are used in protected and structured enviroments or military applications [3], [4]. Generally many differences between ADAS and autonomous vehicles, both hardware and software features, are related on cost and integration: ADAS are embedded into vehicles and might be low cost; on the other hand usually are not heavy limitations on cost and integration related to autonomous vehicles. Obviosly, the main difference is the presence/absence of the driver. Otherwise, most of the undelying ideas are shared, such as perception, planning, actuation needed in this kind of systems.", "title": "" }, { "docid": "1025324cac6dd4754109b12fd89f2715", "text": "A Correct diagnosis of tuberculosis (TB) can be only stated by applying a medical test to patient’s phlegm. The result of this test is obtained after a time period of about 45 days. The purpose of this study is to develop a data mining(DM) solution which makes diagnosis of tuberculosis as accurate as possible and helps deciding if it is reasonable to start tuberculosis treatment on suspected patients without waiting the exact medical test results or not. In this research, we proposed the use of Sugeno-type “adaptive-network-based fuzzy inference system” (ANFIS) to predict the existence of mycobacterium tuberculosis. 667 different patient records which are obtained from a clinic are used in the entire process of this research. Each of the patient records consist of 30 separate input parameters. ANFIS model is generated by using 500 of those records. We also implemented a multilayer perceptron and PART model using the same data set. The ANFIS model classifies the instances with an RMSE of 18% whereas Multilayer Perceptron does the same classification with an RMSE of % 19 and PART algorithm with an RMSE of % 20. 
ANFIS is an accurate and reliable method when compared with Multilayer Perceptron and PART algorithms for classification of tuberculosis patients. This study has contribution on forecasting patients before the medical tests.", "title": "" }, { "docid": "ba27fff04cd942ae5e1126ed6c18cd61", "text": "OBJECTIVE\nTo determine, using cone-beam computed tomography (CBCT), the residual ridge height (RRH), sinus floor membrane thickness (MT), and ostium patency (OP) in patients being evaluated for implant placement in the posterior maxilla.\n\n\nMATERIALS AND METHODS\nCBCT scans of 128 patients (199 sinuses) with ≥1 missing teeth in the posterior maxilla were examined. RRH and MT corresponding to each edentulous site were measured. MT >2 mm was considered pathological and categorized by degree of thickening (2-5, 5-10 mm, and >10 mm). Mucosal appearance was classified as \"normal\", \"flat thickening\", or \"polypoid thickening\", and OP was classified as \"patent\" or \"obstructed\". Descriptive and bivariate statistical analyses were performed.\n\n\nRESULTS\nMT >2 mm was observed in 60.6% patients and 53.6% sinuses. Flat and polypoid mucosal thickening had a prevalence of 38.1% and 15.5%, respectively. RRH ≤4 mm was observed in 46.9% and 48.9% of edentulous first and second molar sites, respectively. Ostium obstruction was observed in 13.1% sinuses and was associated with MT of 2-5 mm (6.7%), 5-10 mm (24%), and >10 mm (35.3%, P < 0.001). Polypoid mucosal lesions were more frequently associated with ostium obstruction than flat thickenings (26.7% vs. 17.6%, P < 0.001).\n\n\nCONCLUSION\nThickened sinus membranes (>2 mm) and reduced residual ridge heights (≤4 mm) were highly prevalent in this sample of patients with missing posterior maxillary teeth. Membrane thickening >5 mm, especially of a polypoid type, is associated with an increased risk for ostium obstruction. In the presence of these findings, an ENT referral may be beneficial prior to implant-related sinus floor elevation.", "title": "" }, { "docid": "c638a99b471a97d690c1867408d0af7b", "text": "The well known SIR models have been around for many years. Under some suitable assumptions, the models provide information about when does the epidemic occur and when it doesn’t. The models can incorporate the birth, death, and immunization and analyze the outcome mathematically. In this project we studied several SIR models including birth, death and immunization. We also studied the bifurcation analysis associated with the disease free and epidemic equilibrium.", "title": "" }, { "docid": "6ed624fa056d1f92cc8e58401ab3036e", "text": "In this paper, we present an approach to segment 3D point cloud data using ideas from persistent homology theory. The proposed algorithms first generate a simplicial complex representation of the point cloud dataset. Next, we compute the zeroth homology group of the complex which corresponds to the number of connected components. Finally, we extract the clusters of each connected component in the dataset. We show that this technique has several advantages over state of the art methods such as the ability to provide a stable segmentation of point cloud data under noisy or poor sampling conditions and its independence of a fixed distance metric.", "title": "" }, { "docid": "2e90d3cfbf6e1090bcdaae6d62ce2c45", "text": "During the intrauterine period a testosterone surge masculinizes the fetal brain, whereas the absence of such a surge results in a feminine brain. 
As sexual differentiation of the brain takes place at a much later stage in development than sexual differentiation of the genitals, these two processes can be influenced independently of each other. Sex differences in cognition, gender identity (an individual's perception of their own sexual identity), sexual orientation (heterosexuality, homosexuality or bisexuality), and the risks of developing neuropsychiatric disorders are programmed into our brain during early development. There is no evidence that one's postnatal social environment plays a crucial role in gender identity or sexual orientation. We discuss the relationships between structural and functional sex differences of various brain areas and the way they change along with any changes in the supply of sex hormones on the one hand and sex differences in behavior in health and disease on the other.", "title": "" }, { "docid": "f78779d6c2937560c68b7a3513c4730f", "text": "We report on the methods used in our recent DeepEnsembleCoco submission to the PASCAL VOC 2012 challenge, which achieves state-of-theart performance on the object detection task. Our method is a variant of the R-CNN model proposed by Girshick et al. [4] with two key improvements to training and evaluation. First, our method constructs an ensemble of deep CNN models with different architectures that are complementary to each other. Second, we augment the PASCAL VOC training set with images from the Microsoft COCO dataset to significantly enlarge the amount training data. Importantly, we select a subset of the Microsoft COCO images to be consistent with the PASCAL VOC task. Results on the PASCAL VOC evaluation server show that our proposed method outperform all previous methods on the PASCAL VOC 2012 detection task at time of submission.", "title": "" } ]
scidocsrr
c714aa5ee992fd0fa4944f768f86b11e
STRATEGIC PLANNING IN A TURBULENT ENVIRONMENT: EVIDENCE FROM THE OIL MAJORS
[ { "docid": "77e501546d95fa18cf2a459fae274875", "text": "Complex organizations exhibit surprising, nonlinear behavior. Although organization scientists have studied complex organizations for many years, a developing set of conceptual and computational tools makes possible new approaches to modeling nonlinear interactions within and between organizations. Complex adaptive system models represent a genuinely new way of simplifying the complex. They are characterized by four key elements: agents with schemata, self-organizing networks sustained by importing energy, coevolution to the edge of chaos, and system evolution based on recombination. New types of models that incorporate these elements will push organization science forward by merging empirical observation with computational agent-based simulation. Applying complex adaptive systems models to strategic management leads to an emphasis on building systems that can rapidly evolve effective adaptive solutions. Strategic direction of complex organizations consists of establishing and modifying environments within which effective, improvised, self-organized solutions can evolve. Managers influence strategic behavior by altering the fitness landscape for local agents and reconfiguring the organizational architecture within which agents adapt. (Complexity Theory; Organizational Evolution; Strategic Management) Since the open-systems view of organizations began to diffuse in the 1960s, comnplexity has been a central construct in the vocabulary of organization scientists. Open systems are open because they exchange resources with the environment, and they are systems because they consist of interconnected components that work together. In his classic discussion of hierarchy in 1962, Simon defined a complex system as one made up of a large number of parts that have many interactions (Simon 1996). Thompson (1967, p. 6) described a complex organization as a set of interdependent parts, which together make up a whole that is interdependent with some larger environment. Organization theory has treated complexity as a structural variable that characterizes both organizations and their environments. With respect to organizations, Daft (1992, p. 15) equates complexity with the number of activities or subsystems within the organization, noting that it can be measured along three dimensions. Vertical complexity is the number of levels in an organizational hierarchy, horizontal complexity is the number of job titles or departments across the organization, and spatial complexity is the number of geographical locations. With respect to environments, complexity is equated with the number of different items or elements that must be dealt with simultaneously by the organization (Scott 1992, p. 230). Organization design tries to match the complexity of an organization's structure with the complexity of its environment and technology (Galbraith 1982). The very first article ever published in Organization Science suggested that it is inappropriate for organization studies to settle prematurely into a normal science mindset, because organizations are enormously complex (Daft and Lewin 1990). What Daft and Lewin meant is that the behavior of complex systems is surprising and is hard to 1047-7039/99/1003/0216/$05.OO ORGANIZATION SCIENCE/Vol. 10, No. 3, May-June 1999 Copyright ? 1999, Institute for Operations Research pp. 216-232 and the Management Sciences PHILIP ANDERSON Complexity Theory and Organization Science predict, because it is nonlinear (Casti 1994). 
In nonlinear systems, intervening to change one or two parameters a small amount can drastically change the behavior of the whole system, and the whole can be very different from the sum of the parts. Complex systems change inputs to outputs in a nonlinear way because their components interact with one another via a web of feedback loops. Gell-Mann (1994a) defines complexity as the length of the schema needed to describe and predict the properties of an incoming data stream by identifying its regularities. Nonlinear systems can difficult to compress into a parsimonious description: this is what makes them complex (Casti 1994). According to Simon (1996, p. 1), the central task of a natural science is to show that complexity, correctly viewed, is only a mask for simplicity. Both social scientists and people in organizations reduce a complex description of a system to a simpler one by abstracting out what is unnecessary or minor. To build a model is to encode a natural system into a formal system, compressing a longer description into a shorter one that is easier to grasp. Modeling the nonlinear outcomes of many interacting components has been so difficult that both social and natural scientists have tended to select more analytically tractable problems (Casti 1994). Simple boxes-andarrows causal models are inadequate for modeling systems with complex interconnections and feedback loops, even when nonlinear relations between dependent and independent variables are introduced by means of exponents, logarithms, or interaction terms. How else might we compress complex behavior so we can comprehend it? For Perrow (1967), the more complex an organization is, the less knowable it is and the more deeply ambiguous is its operation. Modem complexity theory suggests that some systems with many interactions among highly differentiated parts can produce surprisingly simple, predictable behavior, while others generate behavior that is impossible to forecast, though they feature simple laws and few actors. As Cohen and Stewart (1994) point out, normal science shows how complex effects can be understood from simple laws; chaos theory demonstrates that simple laws can have complicated, unpredictable consequences; and complexity theory describes how complex causes can produce simple effects. Since the mid-1980s, new approaches to modeling complex systems have been emerging from an interdisciplinary invisible college, anchored on the Santa Fe Institute (see Waldrop 1992 for a historical perspective). The agenda of these scholars includes identifying deep principles underlying a wide variety of complex systems, be they physical, biological, or social (Fontana and Ballati 1999). Despite somewhat frequent declarations that a new paradigm has emerged, it is still premature to declare that a science of complexity, or even a unified theory of complex systems, exists (Horgan 1995). Holland and Miller (1991) have likened the present situation to that of evolutionary theory before Fisher developed a mathematical theory of genetic selection. This essay is not a review of the emerging body of research in complex systems, because that has been ably reviewed many times, in ways accessible to both scholars and managers. Table 1 describes a number of recent, prominent books and articles that inform this literature; Heylighen (1997) provides an excellent introductory bibliography, with a more comprehensive version available on the Internet at http://pespmcl.vub.ac.be/ Evocobib. html. 
Organization science has passed the point where we can regard as novel a summary of these ideas or an assertion that an empirical phenomenon is consistent with them (see Browning et al. 1995 for a pathbreaking example). Six important insights, explained at length in the works cited in Table 1, should be regarded as well-established scientifically. First, many dynamical systems (whose state at time t determines their state at time t + 1) do not reach either a fixed-point or a cyclical equilibrium (see Dooley and Van de Ven's paper in this issue). Second, processes that appear to be random may be chaotic, revolving around identifiable types of attractors in a deterministic way that seldom if ever return to the same state. An attractor is a limited area in a system's state space that it never departs. Chaotic systems revolve around \"strange attractors,\" fractal objects that constrain the system to a small area of its state space, which it explores in a neverending series that does not repeat in a finite amount of time. Tests exist that can establish whether a given process is random or chaotic (Koput 1997, Ott 1993). Similarly, time series that appear to be random walks may actually be fractals with self-reinforcing trends (Bar-Yam 1997). Third, the behavior of complex processes can be quite sensitive to small differences in initial conditions, so that two entities with very similar initial states can follow radically divergent paths over time. Consequently, historical accidents may \"tip\" outcomes strongly in a particular direction (Arthur 1989). Fourth, complex systems resist simple reductionist analyses, because interconnections and feedback loops preclude holding some subsystems constant in order to study others in isolation. Because descriptions at multiple scales are necessary to identify how emergent properties are produced (Bar-Yam 1997), reductionism and holism are complementary strategies in analyzing such systems (Fontana and Ballati ORGANIZATION SCIENCE/Vol. 10, No. 3, May-June 1999 217 PHILIP ANDERSON Complexity Theory and Organization Science Table 1 Selected Resources that Provide an Overview of Complexity Theory Allison and Kelly, 1999 Written for managers, this book provides an overview of major themes in complexity theory and discusses practical applications rooted in-experiences at firms such as Citicorp. Bar-Yam, 1997 A very comprehensive introduction for mathematically sophisticated readers, the book discusses the major computational techniques used to analyze complex systems, including spin-glass models, cellular automata, simulation methodologies, and fractal analysis. Models are developed to describe neural networks, protein folding, developmental biology, and the evolution of human civilization. Brown and Eisenhardt, 1998 Although this book is not an introduction to complexity theory, a series of small tables throughout the text introduces and explains most of the important concepts. The purpose of the book is to view stra", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
[ { "docid": "92d3987fc0b5d5962f50871ecc23743e", "text": "Wireless sensor networks (WSNs) have become a hot area of research in recent years due to the realization of their ability in myriad applications including military surveillance, facility monitoring, target detection, and health care applications. However, many WSN design problems involve tradeoffs between multiple conflicting optimization objectives such as coverage preservation and energy conservation. Many of the existing sensor network design approaches, however, generally focus on a single optimization objective. For example, while both energy conservation in a cluster-based WSNs and coverage-maintenance protocols have been extensively studied in the past, these have not been integrated in a multi-objective optimization manner. This paper employs a recently developed multiobjective optimization algorithm, the so-called multi-objective evolutionary algorithm based on decomposition (MOEA/D) to solve simultaneously the coverage preservation and energy conservation design problems in cluster-based WSNs. The performance of the proposed approach, in terms of coverage and network lifetime is compared with a state-of-the-art evolutionary approach called NSGA II. Under the same environments, simulation results on different network topologies reveal that MOEA/D provides a feasible approach for extending the network lifetime while preserving more coverage area.", "title": "" }, { "docid": "681221fa1c48361dfc5916c66580c855", "text": "Until recently, those deep steganalyzers in spatial domain are all designed for gray-scale images. In this paper, we propose WISERNet (the wider separate-then-reunion network) for steganalysis of color images. We provide theoretical rationale to claim that the summation in normal convolution is one sort of “linear collusion attack” which reserves strong correlated patterns while impairs uncorrelated noises. Therefore in the bottom convolutional layer which aims at suppressing correlated image contents, we adopt separate channel-wise convolution without summation instead. Conversely, in the upper convolutional layers we believe that the summation in normal convolution is beneficial. Therefore we adopt united normal convolution in those layers and make them remarkably wider to reinforce the effect of “linear collusion attack”. As a result, our proposed wide-and-shallow, separate-then-reunion network structure is specifically suitable for color image steganalysis. We have conducted extensive experiments on color image datasets generated from BOSSBase raw images, with different demosaicking algorithms and downsampling algorithms. The experimental results show that our proposed network outperform other state-of-the-art color image steganalytic models either hand-crafted or learned using deep networks in the literature by a clear margin. Specifically, it is noted that the detection performance gain is achieved with less than half the complexity compared to the most advanced deeplearning steganalyzer as far as we know, which is scarce in the literature.", "title": "" }, { "docid": "ba36e8232460f64fa48c517b264d7254", "text": "We introduce an extension to CCG that allows form and function to be represented simultaneously, reducing the proliferation of modifier categories seen in standard CCG analyses. We can then remove the non-combinatory rules CCGbank uses to address this problem, producing a grammar that is fully lexicalised and far less ambiguous. 
There are intrinsic benefits to full lexicalisation, such as semantic transparency and simpler domain adaptation. The clearest advantage is a 52-88% improvement in parse speeds, which comes with only a small reduction in accuracy.", "title": "" }, { "docid": "9775092feda3a71c1563475bae464541", "text": "Open Shortest Path First (OSPF) is the most commonly used intra-domain internet routing protocol. Traffic flow is routed along shortest paths, sptitting flow at nodes where several outgoing tinks are on shortest paths to the destination. The weights of the tinks, and thereby the shortest path routes, can be changed by the network operator. The weights could be set proportional to their physical distances, but often the main goal is to avoid congestion, i.e. overloading of links, and the standard heuristic rec. ommended by Cisco is to make the weight of a link inversely proportional to its capacity. Our starting point was a proposed AT&T WorldNet backbone with demands projected from previous measurements. The desire was to optimize the weight setting based on the projected demands. We showed that optimiz@ the weight settings for a given set of demands is NP-hard, so we resorted to a local search heuristic. Surprisingly it turned out that for the proposed AT&T WorldNet backbone, we found weight settiis that performed within a few percent from that of the optimal general routing where the flow for each demand is optimalty distributed over all paths between source and destination. This contrasts the common belief that OSPF routing leads to congestion and it shows that for the network and demand matrix studied we cannot get a substantially better load balancing by switching to the proposed more flexible Multi-protocol Label Switching (MPLS) technologies. Our techniques were atso tested on synthetic internetworks, based on a model of Zegura et al. (INFOCOM’96), for which we dld not always get quite as close to the optimal general routing. However, we compared witIs standard heuristics, such as weights inversely proportional to the capac.. ity or proportioml to the physical distances, and found that, for the same network and capacities, we could support a 50 Yo-1 10% increase in the demands. Our assumed demand matrix can also be seen as modeling service level agreements (SLAS) with customers, with demands representing guarantees of throughput for virtnal leased lines. Keywords— OSPF, MPLS, traffic engineering, local search, hashing ta. bles, dynamic shortest paths, mntti-cosnmodity network flows.", "title": "" }, { "docid": "420719690b6249322927153daedba87b", "text": "• In-domain: 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further ncreased the result of our models.", "title": "" }, { "docid": "6127d1952432dcf5c2339bf52d70ea0b", "text": "Crystalline metal-organic frameworks (MOFs) are porous frameworks comprising an infinite array of metal nodes connected by organic linkers. The number of novel MOF structures reported per year is now in excess of 6000, despite significant increases in the complexity of both component units and molecular networks. 
Their regularly repeating structures give rise to chemically variable porous architectures, which have been studied extensively due to their sorption and separation potential. More recently, catalytic applications have been proposed that make use of their chemical tunability, while reports of negative linear compressibility and negative thermal expansion have further expanded interest in the field. Amorphous metal-organic frameworks (aMOFs) retain the basic building blocks and connectivity of their crystalline counterparts, though they lack any long-range periodic order. Aperiodic arrangements of atoms result in their X-ray diffraction patterns being dominated by broad \"humps\" caused by diffuse scattering and thus they are largely indistinguishable from one another. Amorphous MOFs offer many exciting opportunities for practical application, either as novel functional materials themselves or facilitating other processes, though the domain is largely unexplored (total aMOF reported structures amounting to under 30). Specifically, the use of crystalline MOFs to detect harmful guest species before subsequent stress-induced collapse and guest immobilization is of considerable interest, while functional luminescent and optically active glass-like materials may also be prepared in this manner. The ion transporting capacity of crystalline MOFs might be improved during partial structural collapse, while there are possibilities of preparing superstrong glasses and hybrid liquids during thermal amorphization. The tuning of release times of MOF drug delivery vehicles by partial structural collapse may be possible, and aMOFs are often more mechanically robust than crystalline materials, which is of importance for industrial applications. In this Account, we describe the preparation of aMOFs by introduction of disorder into their parent crystalline frameworks through heating, pressure (both hydrostatic and nonhydrostatic), and ball-milling. The main method of characterizing these amorphous materials (analysis of the pair distribution function) is summarized, alongside complementary techniques such as Raman spectroscopy. Detailed investigations into their properties (both chemical and mechanical) are compiled and compared with those of crystalline MOFs, while the impact of the field on the processing techniques used for crystalline MOF powders is also assessed. Crucially, the benefits amorphization may bring to existing proposed MOF applications are detailed, alongside the possibilities and research directions afforded by the combination of the unique properties of the amorphous domain with the versatility of MOF chemistry.", "title": "" }, { "docid": "a3fdbc08bd9b73474319f9bc5c510f85", "text": "With the rapid increase of mobile devices, the computing load of roadside cloudlets is fast growing. When the computation tasks of the roadside cloudlet reach the limit, the overload may generate heat radiation problem and unacceptable delay to mobile users. In this paper, we leverage the characteristics of buses and propose a scalable fog computing paradigm with servicing offloading in bus networks. The bus fog servers not only provide fog computing services for the mobile users on bus, but also are motivated to accomplish the computation tasks offloaded by roadside cloudlets. By this way, the computing capability of roadside cloudlets is significantly extended. We consider an allocation strategy using genetic algorithm (GA). 
With this strategy, the roadside cloudlets spend the least cost to offload their computation tasks. Meanwhile, the user experience of mobile users are maintained. The simulations validate the advantage of the propose scheme.", "title": "" }, { "docid": "606bc892776616ffd4f9f9dc44565019", "text": "Despite the various attractive features that Cloud has to offer, the rate of Cloud migration is rather slow, primarily due to the serious security and privacy issues that exist in the paradigm. One of the main problems in this regard is that of authorization in the Cloud environment, which is the focus of our research. In this paper, we present a systematic analysis of the existing authorization solutions in Cloud and evaluate their effectiveness against well-established industrial standards that conform to the unique access control requirements in the domain. Our analysis can benefit organizations by helping them decide the best authorization technique for deployment in Cloud; a case study along with simulation results is also presented to illustrate the procedure of using our qualitative analysis for the selection of an appropriate technique, as per Cloud consumer requirements. From the results of this evaluation, we derive the general shortcomings of the extant access control techniques that are keeping them from providing successful authorization and, therefore, widely adopted by the Cloud community. To that end, we enumerate the features an ideal access control mechanisms for the Cloud should have, and combine them to suggest the ultimate solution to this major security challenge — access control as a service (ACaaS) for the software as a service (SaaS) layer. We conclude that a meticulous research is needed to incorporate the identified authorization features into a generic ACaaS framework that should be adequate for providing high level of extensibility and security by integrating multiple access control models.", "title": "" }, { "docid": "61b02ae1994637115e3baec128f05bd8", "text": "Ensuring reliability as the electrical grid morphs into the “smart grid” will require innovations in how we assess the state of the grid, for the purpose of proactive maintenance, rather than reactive maintenance – in the future, we will not only react to failures, but also try to anticipate and avoid them using predictive modeling (machine learning) techniques. To help in meeting this challenge, we present the Neutral Online Visualization-aided Autonomic evaluation framework (NOVA) for evaluating machine learning algorithms for preventive maintenance on the electrical grid. NOVA has three stages provided through a unified user interface: evaluation of input data quality, evaluation of machine learning results, and evaluation of the reliability improvement of the power grid. A prototype version of NOVA has been deployed for the power grid in New York City, and it is able to evaluate machine learning systems effectively and efficiently. Appearing in the ICML 2011 Workshop on Machine Learning for Global Challenges, Bellevue, WA, USA, 2011. Copyright 2011 by the author(s)/owner(s).", "title": "" }, { "docid": "a57b2e8b24cced6f8bfad942dd530499", "text": "With the tremendous growth of network-based services and sensitive information on networks, network security is getting more and more importance than ever. Intrusion poses a serious security risk in a network environment. The ever growing new intrusion types posses a serious problem for their detection. 
The human labelling of the available network audit data instances is usually tedious, time consuming and expensive. In this paper, we apply one of the efficient data mining algorithms called naïve bayes for anomaly based network intrusion detection. Experimental results on the KDD cup’99 data set show the novelty of our approach in detecting network intrusion. It is observed that the proposed technique performs better in terms of false positive rate, cost, and computational time when applied to KDD’99 data sets compared to a back propagation neural network based approach.", "title": "" }, { "docid": "6ac6e57937fa3d2a8e319ce17d960c34", "text": "In various application domains there is a desire to compare process models, e.g., to relate an organization-specific process model to a reference model, to find a web service matching some desired service description, or to compare some normative process model with a process model discovered using process mining techniques. Although many researchers have worked on different notions of equivalence (e.g., trace equivalence, bisimulation, branching bisimulation, etc.), most of the existing notions are not very useful in this context. First of all, most equivalence notions result in a binary answer (i.e., two processes are equivalent or not). This is not very helpful, because, in real-life applications, one needs to differentiate between slightly different models and completely different models. Second, not all parts of a process model are equally important. There may be parts of the process model that are rarely activated while other parts are executed for most process instances. Clearly, these should be considered differently. To address these problems, this paper proposes a completely new way of comparing process models. Rather than directly comparing two models, the process models are compared with respect to some typical behavior. This way we are able to avoid the two problems. Although the results are presented in the context of Petri nets, the approach can be applied to any process modeling language with executable semantics.", "title": "" }, { "docid": "d61e481378ee88da7a33cf88bf69dbef", "text": "Deep neural networks (DNNs) have achieved tremendous success in many tasks of machine learning, such as the image classification. Unfortunately, researchers have shown that DNNs are easily attacked by adversarial examples, slightly perturbed images which can mislead DNNs to give incorrect classification results. Such attack has seriously hampered the deployment of DNN systems in areas where security or safety requirements are strict, such as autonomous cars, face recognition, malware detection. Defensive distillation is a mechanism aimed at training a robust DNN which significantly reduces the effectiveness of adversarial examples generation. However, the state-of-the-art attack can be successful on distilled networks with 100% probability. But it is a white-box attack which needs to know the inner information of DNN. Whereas, the black-box scenario is more general. In this paper, we first propose the -neighborhood attack, which can fool the defensively distilled networks with 100% success rate in the white-box setting, and it is fast to generate adversarial examples with good visual quality. On the basis of this attack, we further propose the regionbased attack against defensively distilled DNNs in the blackbox setting. And we also perform the bypass attack to indirectly break the distillation defense as a complementary method. 
The experimental results show that our black-box attacks have a considerable success rate on defensively distilled networks.", "title": "" }, { "docid": "c4490ecc0b0fb0641dc41313d93ccf44", "text": "Machine learning predictive modeling algorithms are governed by “hyperparameters” that have no clear defaults agreeable to a wide range of applications. The depth of a decision tree, number of trees in a forest, number of hidden layers and neurons in each layer in a neural network, and degree of regularization to prevent overfitting are a few examples of quantities that must be prescribed for these algorithms. Not only do ideal settings for the hyperparameters dictate the performance of the training process, but more importantly they govern the quality of the resulting predictive models. Recent efforts to move from a manual or random adjustment of these parameters include rough grid search and intelligent numerical optimization strategies. This paper presents an automatic tuning implementation that uses local search optimization for tuning hyperparameters of modeling algorithms in SAS® Visual Data Mining and Machine Learning. The AUTOTUNE statement in the TREESPLIT, FOREST, GRADBOOST, NNET, SVMACHINE, and FACTMAC procedures defines tunable parameters, default ranges, user overrides, and validation schemes to avoid overfitting. Given the inherent expense of training numerous candidate models, the paper addresses efficient distributed and parallel paradigms for training and tuning models on the SAS® ViyaTM platform. It also presents sample tuning results that demonstrate improved model accuracy and offers recommendations for efficient and effective model tuning.", "title": "" }, { "docid": "568c7ef495bfc10936398990e72a04d2", "text": "Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem. This is because strenuous and high intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing time-varying spectra of PPG and accelerometer data, those frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Process. Cup Database recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Process. Cup Database recorded from 11 subjects while performing forearm and upper arm exercise. (3) Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs which were used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. 
The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on only treadmill experiment datasets (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be accurately captured using the algorithm where the mean Pearson's correlation coefficient between the power spectral densities of the reference and the reconstructed heart rate time series was found to be 0.98. These results show that the SpaMA method has a potential for PPG-based HR monitoring in wearable devices for fitness tracking and health monitoring during intense physical activities.", "title": "" }, { "docid": "18ffa160ffce386993b5c2da5070b364", "text": "This paper presents a new approach for facial attribute classification using a multi-task learning approach. Unlike other approaches that uses hand engineered features, our model learns a shared feature representation that is wellsuited for multiple attribute classification. Learning a joint feature representation enables interaction between different tasks. For learning this shared feature representation we use a Restricted Boltzmann Machine (RBM) based model, enhanced with a factored multi-task component to become Multi-Task Restricted Boltzmann Machine (MT-RBM). Our approach operates directly on faces and facial landmark points to learn a joint feature representation over all the available attributes. We use an iterative learning approach consisting of a bottom-up/top-down pass to learn the shared representation of our multi-task model and at inference we use a bottom-up pass to predict the different tasks. Our approach is not restricted to any type of attributes, however, for this paper we focus only on facial attributes. We evaluate our approach on three publicly available datasets, the Celebrity Faces (CelebA), the Multi-task Facial Landmarks (MTFL), and the ChaLearn challenge dataset. We show superior classification performance improvement over the state-of-the-art.", "title": "" }, { "docid": "51df36570be2707556a8958e16682612", "text": "Through co-design of Augmented Reality (AR) based teaching material, this research aims to enhance collaborative learning experience in primary school education. It will introduce an interactive AR Book based on primary school textbook using tablets as the real time interface. The development of this AR Book employs co-design methods to involve children, teachers, educators and HCI experts from the early stages of the design process. Research insights from the co-design phase will be implemented in the AR Book design. The final outcome of the AR Book will be evaluated in the classroom to explore its effect on the collaborative experience of primary school students. The research aims to answer the question - Can Augmented Books be designed for primary school students in order to support collaboration? This main research question is divided into two sub-questions as follows - How can co-design methods be applied in designing Augmented Book with and for primary school children? And what is the effect of the proposed Augmented Book on primary school students' collaboration? 
This research will not only present a practical application of co-designing AR Book for and with primary school children, it will also clarify the benefit of AR for education in terms of collaborative experience.", "title": "" }, { "docid": "01b3c9758bd68ad68a2f1d262feaa4e8", "text": "A low-voltage-swing MOSFET gate drive technique is proposed in this paper for enhancing the efficiency characteristics of high-frequency-switching dc-dc converters. The parasitic power dissipation of a dc-dc converter is reduced by lowering the voltage swing of the power transistor gate drivers. A comprehensive circuit model of the parasitic impedances of a monolithic buck converter is presented. Closed-form expressions for the total power dissipation of a low-swing buck converter are proposed. The effect of reducing the MOSFET gate voltage swings is explored with the proposed circuit model. A range of design parameters is evaluated, permitting the development of a design space for full integration of active and passive devices of a low-swing buck converter on the same die, for a target CMOS technology. The optimum gate voltage swing of a power MOSFET that maximizes efficiency is lower than a standard full voltage swing. An efficiency of 88% at a switching frequency of 102 MHz is achieved for a voltage conversion from 1.8 to 0.9 V with a low-swing dc-dc converter based on a 0.18-/spl mu/m CMOS technology. The power dissipation of a low-swing dc-dc converter is reduced by 27.9% as compared to a standard full-swing dc-dc converter.", "title": "" }, { "docid": "55a6353fa46146d89c7acd65bee237b5", "text": "The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93\\% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2\\%) and an acceptable false positive rate (5.15\\%) for a vetting purpose.", "title": "" }, { "docid": "738303da7e26ff4145d32526d44c55a8", "text": "Diffuse large B-cell lymphoma (DLBCL) accounts for approximately 30% of non-Hodgkin lymphoma (NHL) cases in adult series. DLBCL is characterized by marked clinical and biological heterogeneity, encompassing up to 16 distinct clinicopathological entities. While current treatments are effective in 60% to 70% of patients, those who are resistant to treatment continue to die from this disease. 
An expert panel performed a systematic review of all data on the diagnosis, prognosis, and treatment of DLBCL published in PubMed, EMBASE and MEDLINE up to December 2017. Recommendations were classified in accordance with the Grading of Recommendations Assessment Development and Evaluation (GRADE) framework, and the proposed recommendations incorporated into practical algorithms. Initial discussions between experts began in March 2016, and a final consensus was reached in November 2017. The final document was reviewed by all authors in February 2018 and by the Scientific Committee of the Spanish Lymphoma Group GELTAMO.", "title": "" }, { "docid": "28fbb71fab5ea16ef52611b31fcf1dfa", "text": "Gamification, an emerging idea for using game design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, few research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and, based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users; that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and, at the same time, advance existing theories.", "title": "" } ]
scidocsrr
c22eff7d67f6fc9a9fd3853f762a53eb
Cross-domain Effects of Music and Language Experience on the Representation of Pitch in the Human Auditory Brainstem
[ { "docid": "3654827519075eac6bfe5ee442c6d4b2", "text": "We examined the relations among phonological awareness, music perception skills, and early reading skills in a population of 100 4- and 5-year-old children. Music skills were found to correlate significantly with both phonological awareness and reading development. Regression analyses indicated that music perception skills contributed unique variance in predicting reading ability, even when variance due to phonological awareness and other cognitive abilities (math, digit span, and vocabulary) had been accounted for. Thus, music perception appears to tap auditory mechanisms related to reading that only partially overlap with those related to phonological awareness, suggesting that both linguistic and nonlinguistic general auditory mechanisms are involved in reading.", "title": "" }, { "docid": "6509150b9a7fcf201eb19b98d88adc4f", "text": "The main aim of the present experiment was to determine whether extensive musical training facilitates pitch contour processing not only in music but also in language. We used a parametric manipulation of final notes' or words' fundamental frequency (F0), and we recorded behavioral and electrophysiological data to examine the precise time course of pitch processing. We compared professional musicians and nonmusicians. Results revealed that within both domains, musicians detected weak F0 manipulations better than nonmusicians. Moreover, F0 manipulations within both music and language elicited similar variations in brain electrical potentials, with overall shorter onset latency for musicians than for nonmusicians. Finally, the scalp distribution of an early negativity in the linguistic task varied with musical expertise, being largest over temporal sites bilaterally for musicians and largest centrally and over left temporal sites for nonmusicians. These results are taken as evidence that extensive musical training influences the perception of pitch contour in spoken language.", "title": "" }, { "docid": "908716e7683bdc78283600f63bd3a1b0", "text": "The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and responseand computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.", "title": "" } ]
[ { "docid": "45390290974f347d559cd7e28c33c993", "text": "Text ambiguity is one of the most interesting phenomenon in human communication and a difficult problem in Natural Language Processing (NLP). Identification of text ambiguities is an important task for evaluating the quality of text and uncovering its vulnerable points. There exist several types of ambiguity. In the present work we review and compare different approaches to ambiguity identification task. We also propose our own approach to this problem. Moreover, we present the prototype of a tool for ambiguity identification and measurement in natural language text. The tool is intended to support the process of writing high quality documents.", "title": "" }, { "docid": "8eb96ae8116a16e24e6a3b60190cc632", "text": "IT professionals are finding that more of their IT investments are being measured against a knowledge management (KM) metric. Those who want to deploy foundation technologies such as groupware, CRM or decision support tools, but fail to justify them on the basis of their contribution to KM, may find it difficult to get funding unless they can frame them within the KM context. Determining KM's pervasiveness and impact is analogous to measuring the contribution of marketing, employee development, or any other management or organizational competency. This paper addresses the problem of developing measurement models for KM metrics and discusses what current KM metrics are in use, and examine their sustainability and soundness in assessing knowledge utilization and retention of generating revenue. The paper will then discuss the use of a Balanced Scorecard approach to determine a business-oriented relationship between strategic KM usage and IT strategy and implementation.", "title": "" }, { "docid": "3667adb02ff66fee9a77ba02a774f42f", "text": "This report points out a correlation between asthma and dental caries. It also gives certain guidelines on the measures to be taken in an asthmatic to negate the risk of dental caries.", "title": "" }, { "docid": "f4708a4f62cb17a83ed14c65e5f14f32", "text": "Data imbalance is common in many vision tasks where one or more classes are rare. Without addressing this issue, conventional methods tend to be biased toward the majority class with poor predictive accuracy for the minority class. These methods further deteriorate on small, imbalanced data that have a large degree of class overlap. In this paper, we propose a novel discriminative sparse neighbor approximation (DSNA) method to ameliorate the effect of class-imbalance during prediction. Specifically, given a test sample, we first traverse it through a cost-sensitive decision forest to collect a good subset of training examples in its local neighborhood. Then, we generate from this subset several class-discriminating but overlapping clusters and model each as an affine subspace. From these subspaces, the proposed DSNA iteratively seeks an optimal approximation of the test sample and outputs an unbiased prediction. We show that our method not only effectively mitigates the imbalance issue, but also allows the prediction to extrapolate to unseen data. The latter capability is crucial for achieving accurate prediction on small data set with limited samples. The proposed imbalanced learning method can be applied to both classification and regression tasks at a wide range of imbalance levels. 
It significantly outperforms the state-of-the-art methods that do not possess an imbalance handling mechanism, and is found to perform comparably or even better than recent deep learning methods by using hand-crafted features only.", "title": "" }, { "docid": "385ae4c2278c2f4b876bf50941e98998", "text": "Deep neural networks (DNN) have been successfully employed for the problem of monaural sound source separation achieving state-of-the-art results. In this paper, we propose using convolutional recurrent neural network (CRNN) architecture for tackling this problem. We focus on a scenario where low algorithmic delay (< 10 ms) is paramount, and relatively little training data is available. We show that the proposed architecture can achieve slightly better performance as compared to feedforward DNNs and long short-term memory (LSTM) networks. In addition to reporting separation performance metrics (i.e., source to distortion ratios), we also report extended short term objective intelligibility (ESTOI) scores which better predict intelligibility performance in presence of non-stationary interferers.", "title": "" }, { "docid": "a13a50d552572d08b4d1496ca87ac160", "text": "In recent years, mining with imbalanced data sets receives more and more attentions in both theoretical and practical aspects. This paper introduces the importance of imbalanced data sets and their broad application domains in data mining, and then summarizes the evaluation metrics and the existing methods to evaluate and solve the imbalance problem. Synthetic minority oversampling technique (SMOTE) is one of the over-sampling methods addressing this problem. Based on SMOTE method, this paper presents two new minority over-sampling methods, borderline-SMOTE1 and borderline-SMOTE2, in which only the minority examples near the borderline are over-sampled. For the minority class, experiments show that our approaches achieve better TP rate and F-value than SMOTE and random over-sampling methods.", "title": "" }, { "docid": "913709f4fe05ba2783c3176ed00015fe", "text": "A generalization of the PWM (pulse width modulation) subharmonic method for controlling single-phase or three-phase multilevel voltage source inverters (VSIs) is considered. Three multilevel PWM techniques for VSI inverters are presented. An analytical expression of the spectral components of the output waveforms covering all the operating conditions is derived. The analysis is based on an extension of Bennet's method. The improvements in harmonic spectrum are pointed out, and several examples are presented which prove the validity of the multilevel modulation. Improvements in the harmonic contents were achieved due to the increased number of levels.<<ETX>>", "title": "" }, { "docid": "345e46da9fc01a100f10165e82d9ca65", "text": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.", "title": "" }, { "docid": "13c250fc46dfc45e9153dbb1dc184b70", "text": "This paper proposes Travel Prediction-based Data forwarding (TPD), tailored and optimized for multihop vehicle-to-vehicle communications. 
The previous schemes forward data packets mostly utilizing statistical information about road network traffic, which becomes much less accurate when vehicles travel in a light-traffic vehicular network. In this light-traffic vehicular network, highly dynamic vehicle mobility can introduce a large variance for the traffic statistics used in the data forwarding process. However, with the popularity of GPS navigation systems, vehicle trajectories become available and can be utilized to significantly reduce this uncertainty in the road traffic statistics. Our TPD takes advantage of these vehicle trajectories for a better data forwarding in light-traffic vehicular networks. Our idea is that with the trajectory information of vehicles in a target road network, a vehicle encounter graph is constructed to predict vehicle encounter events (i.e., timing for two vehicles to exchange data packets in communication range). With this encounter graph, TPD optimizes the data forwarding process for minimal data delivery delay under a specific delivery ratio threshold. Through extensive simulations, we demonstrate that our TPD significantly outperforms existing legacy schemes in a variety of road network settings. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "45bf73a93f0014820864d1805f257bfc", "text": "SEPIC topology based bidirectional DC-DC Converter is proposed for interfacing energy storage elements such as batteries & super capacitors with various power systems. This proposed bidirectional DC-DC converter acts as a buck boost where it changes its output voltage according to its duty cycle. An important factor is used to increase the voltage conversion ratio as well as it achieves high efficiency. The proposed SEPIC-based BDC converter is used to increase the voltage: a low voltage at the input side is converted into a very high level at the output side to drive the HVDC smart grid. In this project a PIC microcontroller is used to give faster response than the existing system. The proposed scheme ensures that the voltage on both sides of the converter is always matched so that the conduction losses can be reduced to improve efficiency. MATLAB/Simulink software is utilized for simulation. The obtained experimental results show the functionality and feasibility of the proposed converter.", "title": "" }, { "docid": "4e7f166d1098b1223f03afa78adc7b46", "text": "This paper builds a theory of trust based on informal contract enforcement in social networks. In our model, network connections between individuals can be used as social collateral to secure informal borrowing. We define network-based trust as the highest amount one agent can borrow from another agent, and derive a reduced-form expression for this quantity which we then use in three applications. (1) We predict that dense networks generate bonding social capital that allows transacting valuable assets, while loose networks create bridging social capital that improves access to cheap favors like information. (2) For job recommendation networks, we show that strong ties between employers and trusted recommenders reduce asymmetric information about the quality of job candidates. (3) Using data from Peru, we show empirically that network-based trust predicts informal borrowing, and we structurally estimate and test our model. E-mails: dean.karlan@yale.edu, mobius@fas.harvard.edu, tanyar@iastate.edu, szeidl@econ.berkeley.edu.
We thank Attila Ambrus, Susan Athey, Antoni Calvó-Armengol, Pablo Casas-Arce, Rachel Croson, Avinash Dixit, Drew Fudenberg, Andrea Galeotti, Ed Glaeser, Sanjeev Goyal, Daniel Hojman, Matthew Jackson, Rachel Kranton, Ariel Pakes, Andrea Prat, Michael Schwarz, Andrei Shleifer, Andy Skrzypacz, Fernando Vega-Redondo and seminar participants for helpful comments. A growing body of research demonstrates the importance of trust for economic outcomes.1 Arrow (1974) calls trust “an important lubricant of a social system”. If trust is low, poverty can persist because individuals are unable to acquire capital, even if they have strong investment opportunities. If trust is high, informal transactions can be woven into daily life and help generate e¢ cient allocations of resources. But what determines the level of trust between individuals? In this paper we propose a model where the social network in‡uences how much agents trust each other. Sociologists such as Granovetter (1985), Coleman (1988) and Putnam (2000) have long argued that social networks play an important role in building trust.2 In our model, networks create trust when agents use connections as social collateral to facilitate informal borrowing. The possibility of losing valuable friendships secures informal transactions the same way that the possibility of losing physical collateral can secure formal lending.3 Since both direct and indirect connections can serve as social collateral, the level of trust is determined by the structure of the entire network. Although we present our model in terms of trust over a borrowing transaction, it can also apply to other situations that involve moral hazard or asymmetric information, such as hiring workers through referrals.4 To understand the basic logic of our model, consider the examples in Figure 1, where agent s would like to borrow an asset, like a car, from agent t, in an economy with no formal contract enforcement. In Figure 1A, the network consists only of s and t; the value of their relationship, which represents either the social bene…ts of friendship or the present value of future transactions, is assumed to be 2. As in standard models of informal contracting, t will only lend the car if its value does not exceed the relationship value of 2. More interesting is Figure 1B, where s and t have a common friend u, the value of the friendship between s and u is 3, and that between u and t is 4. Here, the common friend increases the borrowing limit by min [3; 4] = 3, the weakest link on the path connecting borrower and lender through u, to a total of 5. The logic is that the intermediate agent u vouches for the borrower, acting as a guarantor of the loan transaction. If the borrower chooses not to return the car, he is breaking his promise of repayment to u, and therefore loses u’s Trust has been linked with outcomes including economic growth (Knack and Keefer 1997), judicial e¢ ciency and lack of corruption (La Porta, Lopez-de-Silanes, Shleifer, and Vishny 1997), international trade and …nancial ‡ows (Guiso, Sapienza, and Zingales 2008), and private investment (Bohnet, Herrman, and Zeckhauser 2008). Glaeser, Laibson, Scheinkman, and Soutter (2000) show in experiments that social connections increase trust. 
Field evidence on the role of networks in trust-intensive exchange includes McMillan and Woodruff (1999) and Johnson, McMillan, and Woodruff (2002) for business transactions in Vietnam and transition countries; Townsend (1994) and Udry (1994) for insurance arrangements in India and Nigeria; and Macaulay (1963) and Uzzi (1999) for firms in the U.S. We abstract from morality, altruism and other mechanisms that can generate trust even between strangers (e.g., Fukuyama (1995), Berg, Dickhaut, and McCabe (1995)); hence our definition of trust is like Hardin's (1992). 4 In related work, Kandori (1992), Greif (1993) and Ellison (1994) develop models of community enforcement where deviators are punished by all members of society. More recently, Ali and Miller (2008), Bloch, Genicot, and Ray (2005), Dixit (2003) and Lippert and Spagnolo (2006) explore models of informal contracting where networks are used to transmit information. In contrast, in our work the network serves as social collateral.", "title": "" }, { "docid": "1bb21862a8c5c7264933e19ed316499c", "text": "In this paper, we present approximation algorithms for the directed multi-multiway cut and directed multicut problems. The so-called region growing paradigm [1] is modified and used for these two cut problems on directed graphs. By this paradigm, we give for each problem an approximation algorithm such that both algorithms have an approximate factor. The work previously done on these problems needs to solve k linear programs, whereas our algorithms require only one linear program for obtaining a good approximate factor.", "title": "" }, { "docid": "c7ff67367986a0c7447045cae18fa43a", "text": "Wireless Power Transfer (WPT) technology is a novel research area in the charging technology that bridges the utility and the automotive industries. There are various solutions that are currently being evaluated by several research teams to find the most efficient way to manage the power flow from the grid to the vehicle energy storage system. There are different control parameters that can be utilized to compensate for the change in the impedance due to variable parameters such as battery state-of-charge, coupling factor, and coil misalignment. This paper presents the implementation of an active front-end rectifier on the grid side for power factor control and voltage boost capability for load power regulation. The proposed SiC MOSFET based single phase active front end rectifier with PFC resulted in >97% efficiency at 137mm air-gap and >95% efficiency at 160mm air-gap.", "title": "" }, { "docid": "7f479783ccab6c705bc1d76533f0b1c6", "text": "The purpose of this research, computerized hotel management system with Satellite Motel Ilorin, Nigeria as the case study, is to understand and make use of the computer to solve some of the problems which are usually encountered during manual operations of the hotel management. Finding an accommodation or a hotel after having reached a particular destination is quite time consuming as well as expensive. Here comes the importance of online hotel booking facility. Online hotel booking is one of the latest techniques in the arena of internet that allows travelers to book a hotel located anywhere in the world and that too according to your tastes and preferences. In other words, online hotel booking is one of the awesome facilities of the internet. Booking a hotel online is not only fast as well as convenient but also very cheap.
Nowadays, many of the hotel providers have their sites on the web, which in turn allows the users to visit these sites and view the facilities and amenities offered by each of them. So, the proposed computerized of an online hotel management system is set to find a more convenient, well organized, faster, reliable and accurate means of processing the current manual system of the hotel for both near and far customer.", "title": "" }, { "docid": "8bd5d94ed7b92845abae07a636cce185", "text": "The media world of today's youth is almost completely digital. With newspapers going online and television becoming increasingly digital, the current generation of youth has little reason to consume analog media. Music, movies, and all other forms of mass-mediated content can be obtained via a wide array of digital devices, ranging from CDs to DVDs, from iPods to PDAs. Even their nonmedia experiences are often characterized by a reliance on digital devices. Most young people communicate with most of their acquaintances through cell phones and computer-mediated communication tools such as instant messengers and e-mail systems. 1 And, with the arrival of personal broadcasting technologies such as blogs and social networking sites, many youngsters experience the world through their own self-expression and the expressions of their peers. This serves to blur the traditional boundary between interpersonal and mass communication, leading to an idiosyncratic construction of one's media world. Customization in the digital age—be it in the form of Web sites such as cus-tomizable portals that allow users to shape content or devices such as iPods that allow for customized playlists—enables the user to serve as the gatekeeper of content. As media get highly interactive, multimodal, and navigable, the receiver tends to become the source of communication. 2 While this leads naturally to egocentric construals of one's information environment, it also raises questions about the veracity of all the material that is consumed. The ease of digital publishing has made authors out of us all, leading to a dramatic profusion of information available for personal as well as public consumption. Much of this information, however, is free-floating and does not follow any universally accepted gatekeeping standards, let alone a professional process of writing and editing. Therefore, the veridicality of information accessed on the Web and other digital media is often suspect. 3 This makes credibility a supremely key concern in the new media environment, necessitating the constant need to critically assess information while consuming it. Credibility is classically ascertained by considering the source of information. If the attributed source of a piece of information is a credible person or organization, then, according to conventional wisdom, that information is probably reliable. However, in Internet-based media, source is a murky entity because there are often multiple layers of sources in online transmission of information (e.g., e-mail from a friend giving you a piece of information that he or she found on a newsgroup, posted …", "title": "" }, { "docid": "544a5a95a169b9ac47960780ac09de80", "text": "Monte Carlo Tree Search methods have led to huge progress in Computer Go. Still, program performance is uneven most current Go programs are much stronger in some aspects of the game, such as local fighting and positional evaluation, than in others. 
Well known weaknesses of many programs include the handling of several simultaneous fights, including the “two safe groups” problem, and dealing with coexistence in seki. Starting with a review of MCTS techniques, several conjectures regarding the behavior of MCTS-based Go programs in specific types of Go situations are made. Then, an extensive empirical study of ten leading Go programs investigates their performance of two specifically designed test sets containing “two safe group” and seki situations. The results give a good indication of the state of the art in computer Go as of 2012/2013. They show that while a few of the very top programs can apparently solve most of these evaluation problems in their playouts already, these problems are difficult to solve by global search. ∗shihchie@ualberta.ca †mmueller@ualberta.ca", "title": "" }, { "docid": "a4418b6e010a630a8ae1f10ce23e0ec5", "text": "While neural machine translation (NMT) has made remarkable progress in recent years, it is hard to interpret its internal workings due to the continuous representations and non-linearity of neural networks. In this work, we propose to use layer-wise relevance propagation (LRP) to compute the contribution of each contextual word to arbitrary hidden states in the attention-based encoderdecoder framework. We show that visualization with LRP helps to interpret the internal workings of NMT and analyze translation errors.", "title": "" }, { "docid": "184319fbdee41de23718bb0831c53472", "text": "Localization is a prominent application and research area in Wireless Sensor Networks. Various research studies have been carried out on localization techniques and algorithms in order to improve localization accuracy. Received signal strength indicator is a parameter, which has been widely used in localization algorithms in many research studies. There are several environmental and other factors that affect the localization accuracy and reliability. This study introduces a new technique to increase the localization accuracy by employing a dynamic distance reference anchor method. In order to investigate the performance improvement obtained with the proposed technique, simulation models have been developed, and results have been analyzed. The simulation results show that considerable improvement in localization accuracy can be achieved with the proposed model.", "title": "" }, { "docid": "82e6da590f8f836c9a06c26ef4440005", "text": "We introduce a new count-based optimistic exploration algorithm for reinforcement learning (RL) that is feasible in environments with highdimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. 
The φ-Exploration-Bonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on high-dimensional RL benchmarks.", "title": "" } ]
scidocsrr
9f4188197a105d99f6ef0bf2663b7f78
Retinal Optic Disc Segmentation Using Conditional Generative Adversarial Network
[ { "docid": "d8cc257b156a618b10b97db70306dcfe", "text": "This paper presents Deep Retinal Image Understanding (DRIU), a unified framework of retinal image analysis that provides both retinal vessel and optic disc segmentation. We make use of deep Convolutional Neural Networks (CNNs), which have proven revolutionary in other fields of computer vision such as object detection and image classification, and we bring their power to the study of eye fundus images. DRIU uses a base network architecture on which two set of specialized layers are trained to solve both the retinal vessel and optic disc segmentation. We present experimental validation, both qualitative and quantitative, in four public datasets for these tasks. In all of them, DRIU presents super-human performance, that is, it shows results more consistent with a gold standard than a second human annotator used as control.", "title": "" } ]
[ { "docid": "e5ad17a5e431c8027ae58337615a60bd", "text": "In this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. Drawing inspiration from recent efforts to empower neural networks with a structural bias (Cheng et al., 2016; Kim et al., 2017), we propose a model that can encode a document while automatically inducing rich structural dependencies. Specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. Experimental evaluations across different tasks and datasets show that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.", "title": "" }, { "docid": "251210e932884c2103f7f2d71c5ec519", "text": "Recent work on deep neural networks as acoustic models for automatic speech recognition (ASR) have demonstrated substantial performance improvements. We introduce a model which uses a deep recurrent auto encoder neural network to denoise input features for robust ASR. The model is trained on stereo (noisy and clean) audio features to predict clean features given noisy input. The model makes no assumptions about how noise affects the signal, nor the existence of distinct noise environments. Instead, the model can learn to model any type of distortion or additive noise given sufficient training data. We demonstrate the model is competitive with existing feature denoising approaches on the Aurora2 task, and outperforms a tandem approach where deep networks are used to predict phoneme posteriors directly.", "title": "" }, { "docid": "37aca8c5ec945d4a91984683538b0bc6", "text": "Little is known about the neurobiological mechanisms underlying prosocial decisions and how they are modulated by social factors such as perceived group membership. The present study investigates the neural processes preceding the willingness to engage in costly helping toward ingroup and outgroup members. Soccer fans witnessed a fan of their favorite team (ingroup member) or of a rival team (outgroup member) experience pain. They were subsequently able to choose to help the other by enduring physical pain themselves to reduce the other's pain. Helping the ingroup member was best predicted by anterior insula activation when seeing him suffer and by associated self-reports of empathic concern. In contrast, not helping the outgroup member was best predicted by nucleus accumbens activation and the degree of negative evaluation of the other. We conclude that empathy-related insula activation can motivate costly helping, whereas an antagonistic signal in nucleus accumbens reduces the propensity to help.", "title": "" }, { "docid": "ce863c10e38ca976f0f994b3d1c4f9f1", "text": "Grammatical inference – used successfully in a variety of fields such as pattern recognition, computational biology and natural language processing – is the process of automatically inferring a grammar by examining the sentences of an unknown language. Software engineering can also benefit from grammatical inference. Unlike these other fields, which use grammars as a convenient tool to model naturally occuring patterns, software engineering treats grammars as first-class objects typically created and maintained for a specific purpose by human designers. 
We introduce the theory of grammatical inference and review the state of the art as it relates to software engineering.", "title": "" }, { "docid": "91f20c48f5a4329260aadb87a0d8024c", "text": "In this paper, we survey key design for manufacturing issues for extreme scaling with emerging nanolithography technologies, including double/multiple patterning lithography, extreme ultraviolet lithography, and electron-beam lithography. These nanolithography and nanopatterning technologies have different manufacturing processes and their unique challenges to very large scale integration (VLSI) physical design, mask synthesis, and so on. It is essential to have close VLSI design and underlying process technology co-optimization to achieve high product quality (power/performance, etc.) and yield while making future scaling cost-effective and worthwhile. Recent results and examples will be discussed to show the enablement and effectiveness of such design and process integration, including lithography model/analysis, mask synthesis, and lithography friendly physical design.", "title": "" }, { "docid": "1f5a30218a65e79bdfffb2c2d7dfcc30", "text": "A lot of applications depend on reliable and stable Internet connectivity. These characteristics are crucial for missioncritical services such as telemedical applications. An important factor that can affect connection availability is the convergence time of BGP, the de-facto inter-domain routing (IDR) protocol in the Internet. After a routing change, it may take several minutes until the network converges and BGP routing becomes stable again [13]. Kotronis et al. [8,9] propose a novel Internet routing approach based on SDN principles that combines several Autonomous Systems (AS) into groups, called clusters, and introduces a logically centralized routing decision process for the cluster participants. One of the goals of this concept is to stabilize the IDR system and bring down its convergence time. However, testing whether such approaches can improve on BGP problems requires hybrid SDN and BGP experimentation tools that can emulate multiple ASes. Presently, there is a lack of an easy to use public tool for this purpose. This work fills this gap by building a suitable emulation framework and evaluating the effect that a proof-of-concept IDR controller has on IDR convergence time.", "title": "" }, { "docid": "a1dec377f2f17a508604d5101a5b0e44", "text": "The goal of this work is to develop a soft robotic manipulation system that is capable of autonomous, dynamic, and safe interactions with humans and its environment. First, we develop a dynamic model for a multi-body fluidic elastomer manipulator that is composed entirely from soft rubber and subject to the self-loading effects of gravity. Then, we present a strategy for independently identifying all unknown components of the system: the soft manipulator, its distributed fluidic elastomer actuators, as well as drive cylinders that supply fluid energy. Next, using this model and trajectory optimization techniques we find locally optimal open-loop policies that allow the system to perform dynamic maneuvers we call grabs. In 37 experimental trials with a physical prototype, we successfully perform a grab 92% of the time. 
By studying such an extreme example of a soft robot, we can begin to solve hard problems inhibiting the mainstream use of soft machines.", "title": "" }, { "docid": "4a5a5958eaf3a011a04d4afc1155e521", "text": "1 Department of Geography, University of Kentucky, Lexington, Kentucky, United States of America, 2 Microsoft Research, New York, New York, United States of America, 3 Data & Society, New York, New York, United States of America, 4 Information Law Institute, New York University, New York, New York, United States of America, 5 Department of Media and Communications, London School of Economics, London, United Kingdom, 6 Harvard-Smithsonian Center for Astrophysics, Harvard University, Cambridge, Massachusetts, United States of America, 7 Center for Engineering Ethics and Society, National Academy of Engineering, Washington, DC, United States of America, 8 Institute for Health Aging, University of California-San Francisco, San Francisco, California, United States of America, 9 Ethical Resolve, Santa Cruz, California, United States of America, 10 Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America, 11 Department of Sociology, Columbia University, New York, New York, United States of America, 12 Carey School of Law, University of Maryland, Baltimore, Maryland, United States of America", "title": "" }, { "docid": "e8babc224158f04da2eccd13a4b14b76", "text": "SFI Working Papers contain accounts of scienti5ic work of the author(s) and do not necessarily represent the views of the Santa Fe Institute. We accept papers intended for publication in peer-­‐reviewed journals or proceedings volumes, but not papers that have already appeared in print. Except for papers by our external faculty, papers must be based on work done at SFI, inspired by an invited visit to or collaboration at SFI, or funded by an SFI grant.", "title": "" }, { "docid": "cffce89fbb97dc1d2eb31a060a335d3c", "text": "This doctoral thesis deals with a number of challenges related to investigating and devising solutions to the Sentiment Analysis Problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in-use. The majority of the research and applications building in Sentiment Analysis (SA) / Opinion Mining (OM) have been conducted and developed using Supervised Machine Learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques and aggregation methods will have the effect of compounding the power of all the positive aspects of these tools. In this thesis we will prove three main aspects, namely: 1. That a Hybrid Classification Model based on the techniques mentioned in the previous paragraphs will be capable of: (a) performing same or better than established Supervised Machine Learning techniques -namely, Naı̈ve Bayes and Maximum Entropy (ME)when the latter are utilised respectively as the only classification methods being applied, when calculating subjectivity polarity, and (b) computing the intensity of the polarity previously estimated. 2. That cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms producing a compensatory effect. 3. That the Induced Ordered Weighted Averaging (IOWA) operator is a very good choice to model the opinion of the majority (consensus) when the outputs of a number of classification methods are combined together. 
For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion: • Step 1: we start with the so-called Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination. • Step 2: then, we have continued with the Hybrid Advanced Classification (HAC) method that computes the polarity intensity of opinions/sentiments. • Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively: ◦ the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method. ◦ the Hybrid Advanced Classification with Aggregation by Consensus (HACACO) method.", "title": "" }, { "docid": "f7d023abf0f651177497ae38d8494efc", "text": "Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including, Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. In this paper we realize a formal model for a lightweight semantic–based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation. Using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms). This frequently improves the precision of the system. Employing the passage retrieval system achieves a better precision by retrieving more paragraphs that contain relevant answers to the question; It significantly reduces the amount of text to be processed by the system.", "title": "" }, { "docid": "30aa4e82b5e8a8fb3cc7bea65f389014", "text": "Numerous studies on the mechanisms of ankle injury deal with injuries to the syndesmosis and anterior ligamentous structures but a previous sectioning study also describes the important role of the posterior talofibular ligament (PTaFL) in the ankle's resistance to external rotation of the foot. It was hypothesized that failure level external rotation of the foot would lead to injury of the PTaFL. Ten ankles were tested by externally rotating the foot until gross injury. Two different frequencies of rotation were used in this study, 0.5 Hz and 2 Hz. The mean failure torque of the ankles was 69.5+/-11.7 Nm with a mean failure angle of 40.7+/-7.3 degrees . No effects of rotation frequency or flexion angle were noted. The most commonly injured structure was the PTaFL. Visible damage to the syndesmosis only occurred in combination with fibular fracture in these experiments. The constraint of the subtalar joint in the current study may have affected the mechanics of the foot and led to the resultant strain in the PTaFL. In the real world, talus rotations may be affected by athletic footwear that may influence the location and potential for an ankle injury under external rotation of the foot.", "title": "" }, { "docid": "173f5497089e86c29075df964891ca13", "text": "Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. 
Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that the users can gain a better understanding of the solution. This paper presents an efficient algorithm to extract rules from artificial neural networks. We use two-phase training algorithm for backpropagation learning. In the first phase, the number of hidden nodes of the network is determined automatically in a constructive fashion by adding nodes one after another based on the performance of the network on training data. In the second phase, the number of relevant input units of the network is determined using pruning algorithm. The pruning process attempts to eliminate as many connections as possible from the network. Relevant and irrelevant attributes of the data are distinguished during the training process. Those that are relevant will be kept and others will be automatically discarded. From the simplified networks having small number of connections and nodes we may easily able to extract symbolic rules using the proposed algorithm. Extensive experimental results on several benchmarks problems in neural networks demonstrate the effectiveness of the proposed approach with good generalization ability.", "title": "" }, { "docid": "bbed3608cbae4ce9d21fa3c1413ecff6", "text": "In this paper, a new improved plate detection method which uses genetic algorithm (GA) is proposed. GA randomly scans an input image using a fixed detection window repeatedly, until a region with the highest evaluation score is obtained. The performance of the genetic algorithm is evaluated based on the area coverage of pixels in an input image. It was found that the GA can cover up to 90% of the input image in just less than an average of 50 iterations using 30×130 detection window size, with 20 population members per iteration. Furthermore, the algorithm was tested on a database that contains 1537 car images. Out of these images, more than 98% of the plates were successfully detected.", "title": "" }, { "docid": "f20c0ace77f7b325d2ae4862d300d440", "text": "http://dx.doi.org/10.1016/j.knosys.2014.02.003 0950-7051/ 2014 Elsevier B.V. All rights reserved. ⇑ Corresponding author. Address: Zhejiang University, Hangzhou 310027, China. Tel.: +86 571 87951453. E-mail addresses: xlzheng@zju.edu.cn (X. Zheng), nblin@zju.edu.cn (Z. Lin), alexwang@zju.edu.cn (X. Wang), klin@ece.uci.edu (K.-J. Lin), mnsong@bupt.edu.cn (M. Song). 1 http://www.yelp.com/. Xiaolin Zheng a,b,⇑, Zhen Lin , Xiaowei Wang , Kwei-Jay Lin , Meina Song e", "title": "" }, { "docid": "fb0875ee874dc0ada51d0097993e16c8", "text": "The literature on testing effects is vast but supports surprisingly few prescriptive conclusions for how to schedule practice to achieve both durable and efficient learning. Key limitations are that few studies have examined the effects of initial learning criterion or the effects of relearning, and no prior research has examined the combined effects of these 2 factors. Across 3 experiments, 533 students learned conceptual material via retrieval practice with restudy. 
Items were practiced until they were correctly recalled from 1 to 4 times during an initial learning session and were then practiced again to 1 correct recall in 1-5 subsequent relearning sessions (across experiments, more than 100,000 short-answer recall responses were collected and hand-scored). Durability was measured by cued recall and rate of relearning 1-4 months after practice, and efficiency was measured by total practice trials across sessions. A consistent qualitative pattern emerged: The effects of initial learning criterion and relearning were subadditive, such that the effects of initial learning criterion were strong prior to relearning but then diminished as relearning increased. Relearning had pronounced effects on long-term retention with a relatively minimal cost in terms of additional practice trials. On the basis of the overall patterns of durability and efficiency, our prescriptive conclusion for students is to practice recalling concepts to an initial criterion of 3 correct recalls and then to relearn them 3 times at widely spaced intervals.", "title": "" }, { "docid": "562f0d3835fbd8c79dfef72c2bf751b4", "text": "Alzheimer’s disease (AD) is the most common age-related neurodegenerative disease and has become an urgent public health problem in most areas of the world. Substantial progress has been made in understanding the basic neurobiology of AD and, as a result, new drugs for its treatment have become available. Cholinesterase inhibitors (ChEIs), which increase the availability of acetylcholine in central synapses, have become the main approach to symptomatic treatment. ChEIs that have been approved or submitted to the US Food and Drug Administration (FDA) include tacrine, donepezil, metrifonate, rivastigmine and galantamine. In this review we discuss their pharmacology, clinical experience to date with their use and their potential benefits or disadvantages. ChEIs have a significant, although modest, effect on the cognitive status of patients with AD. In addition to their effect on cognition, ChEIs have a positive effect on mood and behaviour. Uncertainty remains about the duration of the benefit because few studies of these compounds beyond one year have been published. Although ChEIs are generally well tolerated, all patients should be followed closely for possible adverse effects. There is no substantial difference in the effectivenes of the various ChEIs, however, they may have different safety profiles. We believe the benefits of their use outweigh the risks and costs and, therefore, ChEIs should be considered as primary therapy for patients with mild to moderate AD.", "title": "" }, { "docid": "b42788c688193d653bd77379375531ed", "text": "Despite existing work on ensuring generalization of neural networks in terms of scale sensitive complexity measures, such as norms, margin and sharpness, these complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization. In this work we suggest a novel complexity measure based on unit-wise capacities resulting in a tighter generalization bound for two layer ReLU networks. Our capacity bound correlates with the behavior of test error with increasing network sizes, and could potentially explain the improvement in generalization with over-parametrization. 
We further present a matching lower bound for the Rademacher complexity that improves over previous capacity lower bounds for neural networks.", "title": "" }, { "docid": "ec323459d1bd85c80bc54dc9114fd8b8", "text": "The hype around mobile payments has been growing in Sri Lanka with the exponential growth of the mobile adoption and increasing connectivity to the Internet. Mobile payments offer advantages in comparison to other payment modes, benefiting both the consumer and the society at large. Drawing upon the traditional technology adoption theories, this research develops a conceptual framework to uncover the influential factors fundamental to the mobile payment usage. The phenomenon discussed in this research is the factors influencing the use of mobile payments. In relation to the topic, nine independent factors were selected and their influence is to be tested onto behavioral intention to use mobile payments. The questionnaires need to be handed out for data collection for correlation analyses to track the relationship between the nine independent variables and the dependent variable — behavioral intention to use mobile payments. The second correlation analysis between behavioral intention to mobile payments and mobile payment usage is also to be checked together with the two moderating variables — age and level of education.", "title": "" }, { "docid": "f8a5fb5f323f036d38959f97815337a5", "text": "OBJECTIVE\nEarly screening of autism increases the chance of receiving timely intervention. Using the Parent Report Questionnaires is effective in screening autism. The Q-CHAT is a new instrument that has shown several advantages than other screening tools. Because there is no adequate tool for the early screening of autistic traits in Iranian children, we aimed to investigate the adequacy of the Persian translation of Q-CHAT.\n\n\nMETHOD\nAt first, we prepared the Persian translation of the Quantitative Checklist for Autism in Toddlers (Q-CHAT). After that, an appropriate sample was selected and the check list was administered. Our sample included 100 children in two groups (typically developing and autistic children) who had been selected conveniently. Pearson's r was used to determine test-retest reliability, and Cronbach's alpha coefficient was used to explore the internal consistency of Q-CHAT. We used the receiver operating characteristics curve (ROC) to investigate whether Q-CHAT can adequately discriminate between typically developing and ASD children or not. Data analysis was carried out by SPSS 19.\n\n\nRESULT\nThe typically developing group consisted of 50 children with the mean age of 27.14 months, and the ASD group included50 children with the mean age of 29.62 months. The mean of the total score for the typically developing group was 22.4 (SD=6.26) on Q-CHAT and it was 50.94 (SD=12.35) for the ASD group, which was significantly different (p=0.00).The Cronbach's alpha coefficient of the checklist was 0.886, and test-retest reliability was calculated as 0.997 (p<0.01). The estimated area under the curve (AUC) was 0.971. It seems that the total score equal to 30 can be a good cut point to identify toddlers who are at risk of autism (sensitivity= 0.96 and specificity= 0.90).\n\n\nCONCLUSION\nThe Persian translation of Q-CHAT has good reliability and predictive validity and can be used as a screening tool to detect 18 to 24 months old children who are at risk of autism.", "title": "" } ]
scidocsrr
c62b1f1af2bc05477a8089ff832b7d04
802.11 Denial-of-Service Attacks: Real Vulnerabilities and Practical Solutions
[ { "docid": "326cb7464df9c9361be4e27d82f61455", "text": "We implemented an attack against WEP, the link-layer security protocol for 802.11 networks. The attack was described in a recent paper by Fluhrer, Mantin, and Shamir. With our implementation, and permission of the network administrator, we were able to recover the 128 bit secret key used in a production network, with a passive attack. The WEP standard uses RC4 IVs improperly, and the attack exploits this design failure. This paper describes the attack, how we implemented it, and some optimizations to make the attack more efficient. We conclude that 802.11 WEP is totally insecure, and we provide some recommendations.", "title": "" } ]
[ { "docid": "248adf4ee726dce737b7d0cbe3334ea3", "text": "People can often find themselves out of their depth when they face knowledge-based problems, such as faulty technology, or medical concerns. This can also happen in everyday domains that users are simply inexperienced with, like cooking. These are common exploratory search conditions, where users don’t quite know enough about the domain to know if they are submitting a good query, nor if the results directly resolve their need or can be translated to do so. In such situations, people turn to their friends for help, or to forums like StackOverflow, so that someone can explain things to them and translate information to their specific need. This short paper describes work-in-progress within a Google-funded project focusing on Search Literacy in these situations, where improved search skills will help users to learn as they search, to search better, and to better comprehend the results. Focusing on the technology-problem domain, we present initial results from a qualitative study of questions asked and answers given in StackOverflow, and present plans for designing search engine support to help searchers learn as they search.", "title": "" }, { "docid": "44f91387bef2faf4964fa97ba53292db", "text": "In this work, a nonlinear model predictive controller is developed for a batch polymerization process. The physical model of the process is parameterized along a desired trajectory resulting in a trajectory linearized piecewise model (a multiple linear model bank) and the parameters are identified for an experimental polymerization reactor. Then, a multiple model adaptive predictive controller is designed for thermal trajectory tracking of the MMA polymerization. The input control signal to the process is constrained by the maximum thermal power provided by the heaters. The constrained optimization in the model predictive controller is solved via genetic algorithms to minimize a DMC cost function in each sampling interval.", "title": "" }, { "docid": "6c0f3240b86677a0850600bf68e21740", "text": "In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https://github.com/layumi/2016_person_re-ID.", "title": "" }, { "docid": "fae60b86d98a809f876117526106719d", "text": "Big Data security analysis is commonly used for the analysis of large volume security data from an organisational perspective, requiring powerful IT infrastructure and expensive data analysis tools. 
Therefore, it can be considered to be inaccessible to the vast majority of desktop users and is difficult to apply to their rapidly growing data sets for security analysis. A number of commercial companies offer a desktop-oriented big data security analysis solution; however, most of them are prohibitive to ordinary desktop users with respect to cost and IT processing power. This paper presents an intuitive and inexpensive big data security analysis approach using Computational Intelligence (CI) techniques for Windows desktop users, where the combination of Windows batch programming, EmEditor and R are used for the security analysis. The simulation is performed on a real dataset with more than 10 million observations, which are collected from Windows Firewall logs to demonstrate how a desktop user can gain insight into their abundant and untouched data and extract useful information to prevent their system from current and future security threats. This CI-based big data security analysis approach can also be extended to other types of security logs such as event logs, application logs and web logs.", "title": "" }, { "docid": "13cfc33bd8611b3baaa9be37ea9d627e", "text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.", "title": "" }, { "docid": "b1e431f48c52a267c7674b5526d9ee23", "text": "Publish/subscribe is a distributed interaction paradigm well adapted to the deployment of scalable and loosely coupled systems.\n Apache Kafka and RabbitMQ are two popular open-source and commercially-supported pub/sub systems that have been around for almost a decade and have seen wide adoption. Given the popularity of these two systems and the fact that both are branded as pub/sub systems, two frequently asked questions in the relevant online forums are: how do they compare against each other and which one to use?\n In this paper, we frame the arguments in a holistic approach by establishing a common comparison framework based on the core functionalities of pub/sub systems. Using this framework, we then venture into a qualitative and quantitative (i.e. empirical) comparison of the common features of the two systems. Additionally, we also highlight the distinct features that each of these systems has. After enumerating a set of use cases that are best suited for RabbitMQ or Kafka, we try to guide the reader through a determination table to choose the best architecture given his/her particular set of requirements.", "title": "" }, { "docid": "2f20e5792104b67143b7dcc43954317e", "text": "Resource Description Framework (RDF) was designed with the initial goal of developing metadata for the Internet. While the Internet is a conglomeration of many interconnected networks and computers, most of today's best RDF storage solutions are confined to a single node. 
Working on a single node has significant scalability issues, especially considering the magnitude of modern day data. In this paper we introduce a scalable RDF data management system that uses Accumulo, a Google Bigtable variant. We introduce storage methods, indexing schemes, and query processing techniques that scale to billions of triples across multiple nodes, while providing fast and easy access to the data through conventional query mechanisms such as SPARQL. Our performance evaluation shows that in most cases, our system outperforms existing distributed RDF solutions, even systems much more complex than ours.", "title": "" }, { "docid": "1c337dd1935eac802be148b7cb9e671f", "text": "In this paper, we propose generating artificial data that retain statistical properties of real data as the means of providing privacy for the original dataset. We use generative adversarial networks to draw privacy-preserving artificial data samples and derive an empirical method to assess the risk of information disclosure in a differential-privacy-like way. Our experiments show that we are able to generate labelled data of high quality and use it to successfully train and validate supervised models. Finally, we demonstrate that our approach significantly reduces vulnerability of such models to model inversion attacks.", "title": "" }, { "docid": "67808f54305bc2bb2b3dd666f8b4ef42", "text": "Sensing devices are becoming the source of a large portion of the Web data. To facilitate the integration of sensed data with data from other sources, both sensor stream sources and data are being enriched with semantic descriptions, creating Linked Stream Data. Despite its enormous potential, little has been done to explore Linked Stream Data. One of the main characteristics of such data is its “live” nature, which prohibits existing Linked Data technologies to be applied directly. Moreover, there is currently a lack of tools to facilitate publishing Linked Stream Data and making it available to other applications. To address these issues we have developed the Linked Stream Middleware (LSM), a platform that brings together the live real world sensed data and the Semantic Web. A LSM deployment is available at http://lsm.deri.ie/. It provides many functionalities such as: i) wrappers for real time data collection and publishing; ii) a web interface for data annotation and visualisation; and iii) a SPARQL endpoint for querying unified Linked Stream Data and Linked Data. In this paper we describe the system architecture behind LSM, provide details how Linked Stream Data is generated, and demonstrate the benefits of the platform by showcasing its interface.", "title": "" }, { "docid": "47dc7c546c4f0eb2beb1b251ef9e4a81", "text": "In this paper we describe AMT, a tool for monitoring temporal properties of continuous signals. We first introduce STL/PSL, a specification formalism based on the industrial standard language PSL and the real-time temporal logic MITL, extended with constructs that allow describing behaviors of real-valued variables. The tool automatically builds property observers from an STL/PSL specification and checks, in an offline or incremental fashion, whether simulation traces satisfy the property.
The AMT tool is validated through a Flash memory case-study.", "title": "" }, { "docid": "3989aa85b78b211e3d6511cf5fb607bd", "text": "The specific requirements of UAV-photogrammetry necessitate particular solutions for system development, which have mostly been ignored or not assessed adequately in recent studies. Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a UAV-photogrammetry system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software of the system includes the in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct geo-referencing, which was improved to 0.4 cm and 1.7 cm after indirect geo-referencing.", "title": "" }, { "docid": "e1429e1dd862d3687d75c4aac63ae907", "text": "Relational DBMSs remain the main data management technology, despite the big data analytics and no-SQL waves. On the other hand, for data analytics in a broad sense, there are plenty of non-DBMS tools including statistical languages, matrix packages, generic data mining programs and large-scale parallel systems, being the main technology for big data analytics. Such large-scale systems are mostly based on the Hadoop distributed file system and MapReduce. Thus it would seem a DBMS is not a good technology to analyze big data, going beyond SQL queries, acting just as a reliable and fast data repository. In this survey, we argue that is not the case, explaining important research that has enabled analytics on large databases inside a DBMS. However, we also argue DBMSs cannot compete with parallel systems like MapReduce to analyze web-scale text data. Therefore, each technology will keep influencing each other. We conclude with a proposal of long-term research issues, considering the \"big data analytics\" trend.", "title": "" }, { "docid": "ec90e30c0ae657f25600378721b82427", "text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.", "title": "" }, { "docid": "6f166a5ba1916c5836deb379481889cd", "text": "Microbial activities drive the global nitrogen cycle, and in the past few years, our understanding of nitrogen cycling processes and the micro-organisms that mediate them has changed dramatically.
During this time, the processes of anaerobic ammonium oxidation (anammox), and ammonia oxidation within the domain Archaea, have been recognized as two new links in the global nitrogen cycle. All available evidence indicates that these processes and organisms are critically important in the environment, and particularly in the ocean. Here we review what is currently known about the microbial ecology of anaerobic and archaeal ammonia oxidation, highlight relevant unknowns and discuss the implications of these discoveries for the global nitrogen and carbon cycles.", "title": "" }, { "docid": "deb1c65a6e2dfb9ab42f28c74826309c", "text": "Large knowledge bases consisting of entities and relationships between them have become vital sources of information for many applications. Most of these knowledge bases adopt the Semantic-Web data model RDF as a representation model. Querying these knowledge bases is typically done using structured queries utilizing graph-pattern languages such as SPARQL. However, such structured queries require some expertise from users which limits the accessibility to such data sources. To overcome this, keyword search must be supported. In this paper, we propose a retrieval model for keyword queries over RDF graphs. Our model retrieves a set of subgraphs that match the query keywords, and ranks them based on statistical language models. We show that our retrieval model outperforms the-state-of-the-art IR and DB models for keyword search over structured data using experiments over two real-world datasets.", "title": "" }, { "docid": "18247ea0349da81fe2cf93b3663b081f", "text": "Nowadays, more and more companies migrate business from their own servers to the cloud. With the influx of computational requests, datacenters consume tremendous energy every day, attracting great attention in the energy efficiency dilemma. In this paper, we investigate the energy-aware resource management problem in cloud datacenters, where green energy with unpredictable capacity is connected. Via proposing a robust blockchain-based decentralized resource management framework, we save the energy consumed by the request scheduler. Moreover, we propose a reinforcement learning method embedded in a smart contract to further minimize the energy cost. Because the reinforcement learning method is informed from the historical knowledge, it relies on no request arrival and energy supply. Experimental results on Google cluster traces and real-world electricity price show that our approach is able to reduce the datacenters cost significantly compared with other benchmark algorithms.", "title": "" }, { "docid": "8109594325601247cdb253dbb76b9592", "text": "Disturbance compensation is one of the major problems in control system design. Due to external disturbance or model uncertainty that can be treated as disturbance, all control systems are subject to disturbances. When it comes to networked control systems, not only disturbances but also time delay is inevitable where controllers are remotely connected to plants through communication network. Hence, simultaneous compensation for disturbance and time delay is important. Prior work includes a various combinations of smith predictor, internal model control, and disturbance observer tailored to simultaneous compensation of both time delay and disturbance. In particular, simplified internal model control simultaneously compensates for time delay and disturbances. 
But simplified internal model control is not applicable to the plants that have two poles at the origin. We propose a modified simplified internal model control augmented with disturbance observer which simultaneously compensates time delay and disturbances for the plants with two poles at the origin. Simulation results are provided.", "title": "" }, { "docid": "098da928abe37223e0eed0c6bf0f5747", "text": "With the proliferation of social media, fashion inspired from celebrities, reputed designers as well as fashion influencers has shortned the cycle of fashion design and manufacturing. However, with the explosion of fashion related content and large number of user generated fashion photos, it is an arduous task for fashion designers to wade through social media photos and create a digest of trending fashion. Designers do not just wish to have fashion related photos at one place but seek search functionalities that can let them search photos with natural language queries such as ‘red dress’, ’vintage handbags’, etc in order to spot the trends. This necessitates deep parsing of fashion photos on social media to localize and classify multiple fashion items from a given fashion photo. While object detection competitions such as MSCOCO have thousands of samples for each of the object categories, it is quite difficult to get large labeled datasets for fast fashion items. Moreover, state-of-the-art object detectors [2, 7, 9] do not have any functionality to ingest large amount of unlabeled data available on social media in order to fine tune object detectors with labeled datasets. In this work, we show application of a generic object detector [11], that can be pretrained in an unsupervised manner, on 24 categories from recently released Open Images V4 dataset. We first train the base architecture of the object detector using unsupervisd learning on 60K unlabeled photos from 24 categories gathered from social media, and then subsequently fine tune it on 8.2K labeled photos from Open Images V4 dataset. On 300 × 300 image inputs, we achieve 72.7% mAP on a test dataset of 2.4K photos while performing 11% to 17% better as compared to the state-of-the-art object detectors. We show that this improvement is due to our choice of architecture that lets us do unsupervised learning and that performs significantly better in identifying small objects. 1", "title": "" }, { "docid": "dc20d4cac40923be1ba1a706e1fb5abf", "text": "We have implemented and evaluated a method to populate a company ontology, focusing on hierarchical relations such as acquisitions or subsidiaries. Our method searches for information about user-specified companies on the Internet using a search engine API (Google Custom Search API). From the resulted snippets we identify companies using machine learning and extract relations between them using a set of manually defined semantic patterns. We developed filtering methods both for companies and unlikely relations and from the set of company and relation instances we build this way, we construct an ontology addressing identity matching and consistency problems in a company-specific manner. We achieved a precision of 77 to 93 percent, depending on the evaluated relations.", "title": "" }, { "docid": "c22d7b209a107c501aa09e7d16a93008", "text": "With a growing number of courses offered online and degrees offered through the Internet, there is a considerable interest in online education, particularly as it relates to the quality of online instruction. 
The major concerns are centering on the following questions: What will be the new role for instructors in online education? How will students' learning outcomes be assured and improved in online learning environment? How will effective communication and interaction be established with students in the absence of face-to-face instruction? How will instructors motivate students to learn in the online learning environment? This paper will examine new challenges and barriers for online instructors, highlight major themes prevalent in the literature related to “quality control or assurance” in online education, and provide practical strategies for instructors to design and deliver effective online instruction. Recommendations will be made on how to prepare instructors for quality online instruction.", "title": "" } ]
scidocsrr
8a36a7b27bf1715dda981a63bf1764e5
Hiding Data in Video Sequences using LSB with Elliptic Curve Cryptography
[ { "docid": "8c8a100e4dc69e1e68c2bd55f010656d", "text": "In this paper, a data hiding scheme by simple LSB substitution is proposed. By applying an optimal pixel adjustment process to the stego-image obtained by the simple LSB substitution method, the image quality of the stego-image can be greatly improved with low extra computational complexity. The worst case mean-square-error between the stego-image and the cover-image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover-image. The obtained results also show a significant improvement with respect to a previous work. © 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "73862c0aa60c03d5a96f755cdc3bf07b", "text": "Adaptive and innovative application of classical data mining principles and techniques in time series analysis has resulted in development of a concept known as time series data mining. Since the time series are present in all areas of business and scientific research, attractiveness of mining of time series datasets should not be seen only in the context of the research challenges in the scientific community, but also in terms of usefulness of the research results, as a support to the process of business decision-making. A fundamental component in the mining process of time series data is time series segmentation. As a data mining research problem, segmentation is focused on the discovery of rules in movements of observed phenomena in a form of interpretable, novel, and useful temporal patterns. In this Paper, a comprehensive review of the conceptual determinations, including the elements of comparative analysis, of the most commonly used algorithms for segmentation of time series, is being considered.", "title": "" }, { "docid": "899b3bcf6eaaa02e597499862641f868", "text": "Crowdsourcing systems are popular for solving large-scale labeling tasks with low-paid workers. We study the problem of recovering the true labels from the possibly erroneous crowdsourced labels under the popular Dawid–Skene model. To address this inference problem, several algorithms have recently been proposed, but the best known guarantee is still significantly larger than the fundamental limit. We close this gap by introducing a tighter lower bound on the fundamental limit and proving that the belief propagation (BP) exactly matches the lower bound. The guaranteed optimality of BP is the strongest in the sense that it is information-theoretically impossible for any other algorithm to correctly label a larger fraction of the tasks. Experimental results suggest that the BP is close to optimal for all regimes considered and improves upon competing the state-of-the-art algorithms.", "title": "" }, { "docid": "fef24d203d0a2e5d52aa887a0a442cf3", "text": "The property that has given humans a dominant advantage over other species is not strength or speed, but intelligence. If progress in artificial intelligence continues unabated, AI systems will eventually exceed humans in general reasoning ability. A system that is “superintelligent” in the sense of being “smarter than the best human brains in practically every field” could have an enormous impact upon humanity (Bostrom 2014). Just as human intelligence has allowed us to develop tools and strategies for controlling our environment, a superintelligent system would likely be capable of developing its own tools and strategies for exerting control (Muehlhauser and Salamon 2012). In light of this potential, it is essential to use caution when developing AI systems that can exceed human levels of general intelligence, or that can facilitate the creation of such systems.", "title": "" }, { "docid": "63a75bf6cdb340cf328b87feb4f0ee22", "text": "A large number of e-commerce websites have started to markup their products using standards such as Microdata, Microformats, and RDFa. However, the markup is mostly not as fine-grained as desirable for applications and mostly consists of free text properties. This paper discusses the challenges that arise in the task of matching descriptions of electronic products from several thousand e-shops that offer Microdata markup. 
Specifically, our goal is to extract product attributes from product offers, by means of regular expressions, in order to build well structured product specifications. For this purpose we present a technique for learning regular expressions. We evaluate our attribute extraction approach using 1.9 million product offers from 9,240 e-shops which we extracted from the Common Crawl 2012, a large public Web corpus. Our results show that with our approach we are able to reach a similar matching quality as with manually defined regular expressions.", "title": "" }, { "docid": "781bdc522ed49108cd7132a9aaf49fce", "text": "ROC curve analysis is often applied to measure the diagnostic accuracy of a biomarker. The analysis results in two gains: diagnostic accuracy of the biomarker and the optimal cut-point value. There are many methods proposed in the literature to obtain the optimal cut-point value. In this study, a new approach, alternative to these methods, is proposed. The proposed approach is based on the value of the area under the ROC curve. This method defines the optimal cut-point value as the value whose sensitivity and specificity are the closest to the value of the area under the ROC curve and the absolute value of the difference between the sensitivity and specificity values is minimum. This approach is very practical. In this study, the results of the proposed method are compared with those of the standard approaches, by using simulated data with different distribution and homogeneity conditions as well as a real data. According to the simulation results, the use of the proposed method is advised for finding the true cut-point.", "title": "" }, { "docid": "5b134fae94a5cc3a2e1b7cc19c5d29e5", "text": "We explore making virtual desktops behave in a more physically realistic manner by adding physics simulation and using piling instead of filing as the fundamental organizational structure. Objects can be casually dragged and tossed around, influenced by physical characteristics such as friction and mass, much like we would manipulate lightweight objects in the real world. We present a prototype, called BumpTop, that coherently integrates a variety of interaction and visualization techniques optimized for pen input we have developed to support this new style of desktop organization.", "title": "" }, { "docid": "d7345ac01159101a7b1264f844fcc9e1", "text": "Neural networks have become very popular in recent years because of the astonishing success of deep learning in various domains such as image and speech recognition. In many of these domains, specific architectures of neural networks, such as convolutional networks, seem to fit the particular structure of the problem domain very well, and can therefore perform in an astonishingly effective way. However, the success of neural networks is not universal across all domains. Indeed, for learning problems without any special structure, or in cases where the data is somewhat limited, neural networks are known not to perform well with respect to traditional machine learning methods such as random forests. In this paper, we show that a carefully designed neural network with random forest structure can have better generalization ability. In fact, this architecture is more powerful than random forests, because the back-propagation algorithm reduces to a more powerful and generalized way of constructing a decision tree. Furthermore, the approach is efficient to train and requires a small constant factor of the number of training examples. 
This efficiency allows the training of multiple neural networks in order to improve the generalization accuracy. Experimental results on 10 realworld benchmark datasets demonstrate the effectiveness of the proposed enhancements.", "title": "" }, { "docid": "94366591151f18db1551a4a3e4012d95", "text": "As part of the Taste of Computing project, the Exploring Computer Science (ECS) instructional model has been expanded to many high schools in the Chicago Public Schools system. The authors report on initial outcomes showing that students value the ECS course experience, resulting in increased awareness of and interest in the field of computer science. The authors also compare these results by race and gender. The data provide a good basis for exploring the impact of meaningful computer science instruction on students from groups underrepresented in computing; of several hundred students surveyed, nearly half were female, and over half were Hispanic or African American.", "title": "" }, { "docid": "c8e446ab0dbdaf910b5fb98f672a35dc", "text": "MinHash and SimHash are the two widely adopted Locality Sensitive Hashing (LSH) algorithms for large-scale data processing applications. Deciding which LSH to use for a particular problem at hand is an important question, which has no clear answer in the existing literature. In this study, we provide a theoretical answer (validated by experiments) that MinHash virtually always outperforms SimHash when the data are binary, as common in practice such as search. The collision probability of MinHash is a function of resemblance similarity (R), while the collision probability of SimHash is a function of cosine similarity (S). To provide a common basis for comparison, we evaluate retrieval results in terms of S for both MinHash and SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH with respect to S, by using a general inequality S ≤ R ≤ S 2−S . Our worst case analysis can show that MinHash significantly outperforms SimHash in high similarity region. Interestingly, our intensive experiments reveal that MinHash is also substantially better than SimHash even in datasets where most of the data points are not too similar to each other. This is partly because, in practical data, often R ≥ S z−S holds where z is only slightly larger than 2 (e.g., z ≤ 2.1). Our restricted worst case analysis by assuming S z−S ≤ R ≤ S 2−S shows that MinHash indeed significantly outperforms SimHash even in low similarity region. We believe the results in this paper will provide valuable guidelines for search in practice, especially when the data are sparse. Appearing in Proceedings of the 17 International Conference on Artificial Intelligence and Statistics (AISTATS) 2014, Reykjavik, Iceland. JMLR: W&CP volume 33. Copyright 2014 by the authors.", "title": "" }, { "docid": "6140255e69aa292bf8c97c9ef200def7", "text": "Food production requires application of fertilizers containing phosphorus, nitrogen and potassium on agricultural fields in order to sustain crop yields. However modern agriculture is dependent on phosphorus derived from phosphate rock, which is a non-renewable resource and current global reserves may be depleted in 50–100 years. While phosphorus demand is projected to increase, the expected global peak in phosphorus production is predicted to occur around 2030. 
The exact timing of peak phosphorus production might be disputed, however it is widely acknowledged within the fertilizer industry that the quality of remaining phosphate rock is decreasing and production costs are increasing. Yet future access to phosphorus receives little or no international attention. This paper puts forward the case for including long-term phosphorus scarcity on the priority agenda for global food security. Opportunities for recovering phosphorus and reducing demand are also addressed together with institutional challenges. © 2009 Published by Elsevier Ltd.", "title": "" }, { "docid": "5ccda95046b0e5d1cfc345011b1e350d", "text": "Considerable emphasis is currently placed on reducing healthcare-associated infection through improving hand hygiene compliance among healthcare professionals. There is also increasing discussion in the lay media of perceived poor hand hygiene compliance among healthcare staff. Our aim was to report the outcomes of a systematic search for peer-reviewed, published studies - especially clinical trials - that focused on hand hygiene compliance among healthcare professionals. Literature published between December 2009, after publication of the World Health Organization (WHO) hand hygiene guidelines, and February 2014, which was indexed in PubMed and CINAHL on the topic of hand hygiene compliance, was searched. Following examination of relevance and methodology of the 57 publications initially retrieved, 16 clinical trials were finally included in the review. The majority of studies were conducted in the USA and Europe. The intensive care unit emerged as the predominant focus of studies followed by facilities for care of the elderly. The category of healthcare worker most often the focus of the research was the nurse, followed by the healthcare assistant and the doctor. The unit of analysis reported for hand hygiene compliance was 'hand hygiene opportunity'; four studies adopted the 'my five moments for hand hygiene' framework, as set out in the WHO guidelines, whereas other papers focused on unique multimodal strategies of varying design. We concluded that adopting a multimodal approach to hand hygiene improvement intervention strategies, whether guided by the WHO framework or by another tested multimodal framework, results in moderate improvements in hand hygiene compliance.", "title": "" }, { "docid": "c224cc83b4c58001dbbd3e0ea44a768a", "text": "We review the current status of research in dorsal-ventral (D-V) patterning in vertebrates. Emphasis is placed on recent work on Xenopus, which provides a paradigm for vertebrate development based on a rich heritage of experimental embryology. D-V patterning starts much earlier than previously thought, under the influence of a dorsal nuclear β-Catenin signal. At mid-blastula two signaling centers are present on the dorsal side: The prospective neuroectoderm expresses bone morphogenetic protein (BMP) antagonists, and the future dorsal endoderm secretes Nodal-related mesoderm-inducing factors. When dorsal mesoderm is formed at gastrula, a cocktail of growth factor antagonists is secreted by the Spemann organizer and further patterns the embryo. A ventral gastrula signaling center opposes the actions of the dorsal organizer, and another set of secreted antagonists is produced ventrally under the control of BMP4. The early dorsal β-Catenin signal inhibits BMP expression at the transcriptional level and promotes expression of secreted BMP antagonists in the prospective central nervous system (CNS).
In the absence of mesoderm, expression of Chordin and Noggin in ectoderm is required for anterior CNS formation. FGF (fibroblast growth factor) and IGF (insulin-like growth factor) signals are also potent neural inducers. Neural induction by anti-BMPs such as Chordin requires mitogen-activated protein kinase (MAPK) activation mediated by FGF and IGF. These multiple signals can be integrated at the level of Smad1. Phosphorylation by BMP receptor stimulates Smad1 transcriptional activity, whereas phosphorylation by MAPK has the opposite effect. Neural tissue is formed only at very low levels of activity of BMP-transducing Smads, which require the combination of both low BMP levels and high MAPK signals. Many of the molecular players that regulate D-V patterning via regulation of BMP signaling have been conserved between Drosophila and the vertebrates.", "title": "" }, { "docid": "ce87a635c0c3aaa17e7b83d5fb52adce", "text": "We present a novel definition of the reinforcement learning state, actions and reward function that allows a deep Q-network (DQN) to learn to control an optimization hyperparameter. Using Q-learning with experience replay, we train two DQNs to accept a state representation of an objective function as input and output the expected discounted return of rewards, or q-values, connected to the actions of either adjusting the learning rate or leaving it unchanged. The two DQNs learn a policy similar to a line search, but differ in the number of allowed actions. The trained DQNs in combination with a gradient-based update routine form the basis of the Q-gradient descent algorithms. To demonstrate the viability of this framework, we show that the DQN’s q-values associated with optimal action converge and that the Q-gradient descent algorithms outperform gradient descent with an Armijo or nonmonotone line search. Unlike traditional optimization methods, Q-gradient descent can incorporate any objective statistic and by varying the actions we gain insight into the type of learning rate adjustment strategies that are successful for neural network optimization.", "title": "" }, { "docid": "7c9a28889b209832adfbdee93494620d", "text": "Wake-up radios have been a popular transceiver architecture in recent years for battery-powered applications such as wireless body area networks (WBANs) [1], wireless sensor networks (WSNs) [2,3], and even electronic toll collection systems (ETCS) [4]. The most important consideration in implementing a wake-up receiver (WuRX) is low power dissipation while maximizing sensitivity. Because of this requirement of very low power, WuRX are usually designed by a simple RF envelope detector (RFED) consisting of Schottky diodes [1,3] or MOSFETs in the weak inversion region [2] without active filtering or amplification of the input signal. Therefore, the performance of the RFED itself is critical for attaining good sensitivity of the WuRX. Moreover, the poor filtering of the input signal renders the WuRX vulnerable to interferers from nearby terminals with high transmit power such as mobile phones and WiFi devices, and this can result in false wake-ups [1]. Although the RFED has very low power, a false wake-up will increase the power consumption of the wake-up radio as it will enable the power-hungry main transceiver.", "title": "" }, { "docid": "aed5bb8a488215afaf30efe054d22d4b", "text": "OBJECTIVE\nStudies of the neurobiological processes underlying drug addiction primarily have focused on limbic subcortical structures. 
Here the authors evaluated the role of frontal cortical structures in drug addiction.\n\n\nMETHOD\nAn integrated model of drug addiction that encompasses intoxication, bingeing, withdrawal, and craving is proposed. This model and findings from neuroimaging studies on the behavioral, cognitive, and emotional processes that are at the core of drug addiction were used to analyze the involvement of frontal structures in drug addiction.\n\n\nRESULTS\nThe orbitofrontal cortex and the anterior cingulate gyrus, which are regions neuroanatomically connected with limbic structures, are the frontal cortical areas most frequently implicated in drug addiction. They are activated in addicted subjects during intoxication, craving, and bingeing, and they are deactivated during withdrawal. These regions are also involved in higher-order cognitive and motivational functions, such as the ability to track, update, and modulate the salience of a reinforcer as a function of context and expectation and the ability to control and inhibit prepotent responses.\n\n\nCONCLUSIONS\nThese results imply that addiction connotes cortically regulated cognitive and emotional processes, which result in the overvaluing of drug reinforcers, the undervaluing of alternative reinforcers, and deficits in inhibitory control for drug responses. These changes in addiction, which the authors call I-RISA (impaired response inhibition and salience attribution), expand the traditional concepts of drug dependence that emphasize limbic-regulated responses to pleasure and reward.", "title": "" }, { "docid": "87949c3616f14711fe0eb6f7cc9f95b3", "text": "Three hydroponic systems (aeroponics, aerohydroponics, and deep-water culture) were compared for the production of potato (Solanum tuberosum) seed tubers. Aerohydroponics was designed to improve the root zone environment of aeroponics by maintaining root contact with nutrient solution in the lower part of the beds, while intermittently spraying roots in the upper part. Root vitality, shoot fresh and dry weight, and total leaf area were significantly highest when cv. Superior, a medium early-maturing cultivar, was grown in the aeroponic system. This better plant growth in the aeroponic system was accompanied by rapid changes of solution pH and EC, and early tuberization. However, with cv. Atlantic, a mid-late maturing cultivar, there were no significant differences in shoot weight and leaf area among the hydroponic systems. The first tuberization was observed in aeroponics on 26–30 and 43–53 days after transplanting for cvs Superior and Atlantic, respectively. Tuberization in aerohydroponics and deep-water culture system occurred about 3–4 and 6–8 days later, respectively. The number of tubers produced was greatest in the deep-water culture system, but the total tuber weight per plant was the least in this system. For cv. Atlantic, the number of tubers <30 g weight was higher in aerohydroponics than in aeroponics, whereas there was no difference in the number of tubers >30 g between aerohydroponics and aeroponics. For cv. Superior, there was no difference in the size distribution of tubers between the two aeroponic systems. It could be concluded that deep-water culture system could be used to produce many small tubers (1–5 g) for plant propagation. However, the reduced number of large tubers above 5 g weight in the deep-water culture system, may favor use of either aeroponics or aerohydroponics. 
These two systems produced a similar number of tubers in each size group for the medium-early season cv. Superior, whereas aerohydroponics produced more tubers than aeroponics for the mid-late cultivar Atlantic.", "title": "" }, { "docid": "4de2c6422d8357e6cb00cce21e703370", "text": "OBJECTIVE\nFalls and fall-related injuries are leading problems in residential aged care facilities. The objective of this study was to provide descriptive data about falls in nursing homes.\n\n\nDESIGN/SETTING/PARTICIPANTS\nProspective recording of all falls over 1 year covering all residents from 528 nursing homes in Bavaria, Germany.\n\n\nMEASUREMENTS\nFalls were reported on a standardized form that included a facility identification code, date, time of the day, sex, age, degree of care need, location of the fall, and activity leading to the fall. Data detailing homes' bed capacities and occupancy levels were used to estimate total person-years under exposure and to calculate fall rates. All analyses were stratified by residents' degree of care need.\n\n\nRESULTS\nMore than 70,000 falls were recorded during 42,843 person-years. The fall rate was higher in men than in women (2.18 and 1.49 falls per person-year, respectively). Fall risk differed by degree of care need with lower fall risks both in the least and highest care categories. About 75% of all falls occurred in the residents' rooms or in the bathrooms and only 22% were reported within the common areas. Transfers and walking were responsible for 41% and 36% of all falls respectively. Fall risk varied during the day. Most falls were observed between 10 am and midday and between 2 pm and 8 pm.\n\n\nCONCLUSION\nThe differing fall risk patterns in specific subgroups may help to target preventive measures.", "title": "" }, { "docid": "b1c0351af515090e418d59a4b553b866", "text": "BACKGROUND\nThe dermatoscopic examination of the nail plate has been recently introduced for the evaluation of pigmented nail lesions. There is, however, no evidence that this technique improves diagnostic accuracy of in situ melanoma.\n\n\nOBJECTIVE\nTo establish and validate patterns for intraoperative dermatoscopy of the nail matrix.\n\n\nMETHODS\nIntraoperative nail matrix dermatoscopy was performed in 100 consecutive bands of longitudinal melanonychia that were excised and submitted to histopathologic examination.\n\n\nRESULTS\nWe identified 4 dermatoscopic patterns: regular gray pattern (hypermelanosis), regular brown pattern (benign melanocytic hyperplasia), regular brown pattern with globules or blotch (melanocytic nevi), and irregular pattern (melanoma).\n\n\nLIMITATIONS\nNail matrix dermatoscopy is an invasive procedure that can not routinely be performed in all cases of melanonychia.\n\n\nCONCLUSION\nThe patterns described present high sensitivity and specificity for intraoperative differential diagnosis of pigmented nail lesions.", "title": "" }, { "docid": "793453bdbd1044309e62736ab8b7f017", "text": "There has been a rapid increase in the number and demand for approved biopharmaceuticals produced from animal cell culture processes over the last few years. In part, this has been due to the efficacy of several humanized monoclonal antibodies that are required at large doses for therapeutic use. There have also been several identifiable advances in animal cell technology that has enabled efficient biomanufacture of these products. 
Gene vector systems allow high specific protein expression and some minimize the undesirable process of gene silencing that may occur in prolonged culture. Characterization of cellular metabolism and physiology has enabled the design of fed-batch and perfusion bioreactor processes that has allowed a significant improvement in product yield, some of which are now approaching 5 g/L. Many of these processes are now being designed in serum-free and animal-component-free media to ensure that products are not contaminated with the adventitious agents found in bovine serum. There are several areas that can be identified that could lead to further improvement in cell culture systems. This includes the down-regulation of apoptosis to enable prolonged cell survival under potentially adverse conditions. The characterization of the critical parameters of glycosylation should enable process control to reduce the heterogeneity of glycoforms so that production processes are consistent. Further improvement may also be made by the identification of glycoforms with enhanced biological activity to enhance clinical efficacy. The ability to produce the ever-increasing number of biopharmaceuticals by animal cell culture is dependent on sufficient bioreactor capacity in the industry. A recent shortfall in available worldwide culture capacity has encouraged commercial activity in contract manufacturing operations. However, some analysts indicate that this still may not be enough and that future manufacturing demand may exceed production capacity as the number of approved biotherapeutics increases.", "title": "" }, { "docid": "16b08c95aaa4f7db98b00b50cb387014", "text": "Blockchain-based solutions are one of the major areas of research for institutions, particularly in the financial and the government sectors. There is little disagreement that backbone technologies currently used in these sectors are outdated and need an overhaul to conform to the needs of the times. Distributed or decentralized ledgers in the form of blockchains are one of themost discussed potential solutions to the stated problem. We provide a description of permissioned blockchain systems that could be used in creating secure ledgers or timestamped registries. We contend that the blockchain protocol and data should be accessible to end users to provide a higher level of decentralization and transparency and argue that proof ofwork could be effectively used in permissioned blockchains as a means of providing and diversifying security.", "title": "" } ]
scidocsrr
4bc9b2a9cedd00cd8dbc1a54e336c86b
Exploring the Use of Autoencoders for Botnets Traffic Representation
[ { "docid": "2ffb20d66a0d5cb64442c2707b3155c6", "text": "A botnet is a network of compromised hosts that is under the control of a single, malicious entity, often called the botmaster. We present a system that aims to detect bot-infected machines, independent of any prior information about the command and control channels or propagation vectors, and without requiring multiple infections for correlation. Our system relies on detection models that target the characteristic fact that every bot receives commands from the botmaster to which it responds in a specific way. These detection models are generated automatically from network traffic traces recorded from actual bot instances. We have implemented the proposed approach and demonstrate that it can extract effective detection models for a variety of different bot families. These models are precise in describing the activity of bots and raise very few false positives.", "title": "" } ]
[ { "docid": "545998c2badee9554045c04983b1d11b", "text": "This paper presents a new control approach for nonlinear network-induced time delay systems by combining online reset control, neural networks, and dynamic Bayesian networks. We use feedback linearization to construct a nominal control for the system then use reset control and a neural network to compensate for errors due to the time delay. Finally, we obtain a stochastic model of the Networked Control System (NCS) using a Dynamic Bayesian Network (DBN) and use it to design a predictive control. We apply our control methodology to a nonlinear inverted pendulum and evaluate its performance through numerical simulations. We also test our approach with real-time experiments on a dc motor-load NCS with wireless communication implemented using a Ubiquitous Sensor Network (USN). Both the simulation and experimental results demonstrate the efficacy of our control methodology.", "title": "" }, { "docid": "0ecded7fad85b79c4c288659339bc18b", "text": "We present an end-to-end supervised based system for detecting malware by analyzing network traffic. The proposed method extracts 972 behavioral features across different protocols and network layers, and refers to different observation resolutions (transaction, session, flow and conversation windows). A feature selection method is then used to identify the most meaningful features and to reduce the data dimensionality to a tractable size. Finally, various supervised methods are evaluated to indicate whether traffic in the network is malicious, to attribute it to known malware “families” and to discover new threats. A comparative experimental study using real network traffic from various environments indicates that the proposed system outperforms existing state-of-the-art rule-based systems, such as Snort and Suricata. In particular, our chronological evaluation shows that many unknown malware incidents could have been detected at least a month before their static rules were introduced to either the Snort or Suricata systems.", "title": "" }, { "docid": "c79be5b8b375a9bced1bfe5c3f9024ce", "text": "Recent technological advances have enabled DNA methylation to be assayed at single-cell resolution. However, current protocols are limited by incomplete CpG coverage and hence methods to predict missing methylation states are critical to enable genome-wide analyses. We report DeepCpG, a computational approach based on deep neural networks to predict methylation states in single cells. We evaluate DeepCpG on single-cell methylation data from five cell types generated using alternative sequencing protocols. DeepCpG yields substantially more accurate predictions than previous methods. Additionally, we show that the model parameters can be interpreted, thereby providing insights into how sequence composition affects methylation variability.", "title": "" }, { "docid": "6fd3f4ab064535d38c01f03c0135826f", "text": "BACKGROUND\nThere is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews on the evidence of their validity and utility mostly compare different sets of tools. 
This review of systematic reviews analyses and summarises evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment.\n\n\nMETHODS\nWe searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. The reviews were synthesised using a narrative synthesis approach.\n\n\nRESULTS\nWe retrieved 441 potentially eligible reviews, 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of tools' validity. Most importantly, the samples were small providing limited evidence for use of any of the tools across settings or populations.\n\n\nCONCLUSIONS\nThere are a considerable number of pain assessment tools available for use with the elderly cognitive impaired population. However there is limited evidence about their reliability, validity and clinical utility. On the basis of this review no one tool can be recommended given the existing evidence.", "title": "" }, { "docid": "457ba37bf69b870db2653b851d271b0b", "text": "This paper presents a unified approach to local trajectory planning and control for the autonomous ground vehicle driving along a rough predefined path. In order to cope with the unpredictably changing environment reactively and reason about the global guidance, we develop an efficient sampling-based model predictive local path generation approach to generate a set of kinematically-feasible trajectories aligning with the reference path. A discrete optimization scheme is developed to select the best path based on a specified objective function, then followed by the velocity profile generation. As for the low-level control, to achieve high performance of control, two degree of freedom control architecture is employed by combining the feedforward control with the feedback control. The simulation results demonstrate the capability of the proposed approach to track the curvature-discontinuous reference path robustly, while avoiding collisions with static obstacles.", "title": "" }, { "docid": "57856c122a6f8a0db8423a1af9378b3e", "text": "Probiotics are defined as live microorganisms, which when administered in adequate amounts, confer a health benefit on the host. Health benefits have mainly been demonstrated for specific probiotic strains of the following genera: Lactobacillus, Bifidobacterium, Saccharomyces, Enterococcus, Streptococcus, Pediococcus, Leuconostoc, Bacillus, Escherichia coli. The human microbiota is getting a lot of attention today and research has already demonstrated that alteration of this microbiota may have far-reaching consequences. One of the possible routes for correcting dysbiosis is by consuming probiotics. The credibility of specific health claims of probiotics and their safety must be established through science-based clinical studies. 
This overview summarizes the most commonly used probiotic microorganisms and their demonstrated health claims. As probiotic properties have been shown to be strain specific, accurate identification of particular strains is also very important. On the other hand, it is also demonstrated that the use of various probiotics for immunocompromised patients or patients with a leaky gut has also yielded infections, sepsis, fungemia, bacteraemia. Although the vast majority of probiotics that are used today are generally regarded as safe and beneficial for healthy individuals, caution in selecting and monitoring of probiotics for patients is needed and complete consideration of risk-benefit ratio before prescribing is recommended.", "title": "" }, { "docid": "95bbe5d13f3ca5f97d01f2692a9dc77a", "text": "Moringa oleifera Lam. (family; Moringaceae), commonly known as drumstick, have been used for centuries as a part of the Ayurvedic system for several diseases without having any scientific data. Demineralized water was used to prepare aqueous extract by maceration for 24 h and complete metabolic profiling was performed using GC-MS and HPLC. Hypoglycemic properties of extract have been tested on carbohydrate digesting enzyme activity, yeast cell uptake, muscle glucose uptake, and intestinal glucose absorption. Type 2 diabetes was induced by feeding high-fat diet (HFD) for 8 weeks and a single injection of streptozotocin (STZ, 45 mg/kg body weight, intraperitoneally) was used for the induction of type 1 diabetes. Aqueous extract of M. oleifera leaf was given orally at a dose of 100 mg/kg to STZ-induced rats and 200 mg/kg in HFD mice for 3 weeks after diabetes induction. Aqueous extract remarkably inhibited the activity of α-amylase and α-glucosidase and it displayed improved antioxidant capacity, glucose tolerance and rate of glucose uptake in yeast cell. In STZ-induced diabetic rats, it produces a maximum fall up to 47.86% in acute effect whereas, in chronic effect, it was 44.5% as compared to control. The fasting blood glucose, lipid profile, liver marker enzyme level were significantly (p < 0.05) restored in both HFD and STZ experimental model. Multivariate principal component analysis on polar and lipophilic metabolites revealed clear distinctions in the metabolite pattern in extract and in blood after its oral administration. Thus, the aqueous extract can be used as phytopharmaceuticals for the management of diabetes by using as adjuvants or alone.", "title": "" }, { "docid": "1be6aecdc3200ed70ede2d5e96cb43be", "text": "In this paper we are exploring different models and methods for improving the performance of text independent speaker identification system for mobile devices. The major issues in speaker recognition for mobile devices are (i) presence of varying background environment, (ii) effect of speech coding introduced by the mobile device, and (iii) impairments due to wireless channel. In this paper, we are proposing multi-SNR multi-environment speaker models and speech enhancement (preprocessing) methods for improving the performance of speaker recognition system in mobile environment. For this study, we have simulated five different background environments (Car, Factory, High frequency, pink noise and white Gaussian noise) using NOISEX data. Speaker recognition studies are carried out on TIMIT, cellular, and microphone speech databases. Autoassociative neural network models are explored for developing these multi-SNR multi-environment speaker models. 
The results indicate that the proposed multi-SNR multi-environment speaker models and speech enhancement preprocessing methods have enhanced the speaker recognition performance in the presence of different noisy environments.", "title": "" }, { "docid": "6646b5b8b4b9946fb58b4570763904d9", "text": "Nowadays, mobile devices are ubiquitous in people's everyday life and applications on mobile devices are becoming increasingly resource-hungry. However, the resources on mobile devices are limited. Mobile cloud computing addresses the resource scarcity problem of mobile devices by offloading computation and/or data from mobile devices into the cloud. In the converging progress of mobile computing and cloud computing, the cloudlet is an important complement to the client-cloud hierarchy. This paper presents an extensive survey of researches on cloudlet based mobile computing. We first retrospect the evolution of cloudlet based mobile computing. After that, we review the existing works on cloudlet based computation offloading and data offloading. Then we introduce two examples of commercial cloudlet products. At last, we discuss the current situation, the challenges and future directions of this area.", "title": "" }, { "docid": "5a6bfd63fbbe4aea72226c4aa30ac05d", "text": "Sotka, E.E., Bell, T., Hughes, L.E., Lowry, J.K. & Poore, A.G.B. (2016). A molecular phylogeny of marine amphipods in the herbivorous family Ampithoidae. —Zoologica Scripta, 00, 000–000. Ampithoid amphipods dominate invertebrate assemblages associated with shallow-water macroalgae and seagrasses worldwide and represent the most species-rich family of herbivorous amphipod known. To generate the first molecular phylogeny of this family, we sequenced 35 species from 10 genera at two mitochondrial genes [the cytochrome c oxidase subunit I (COI) and the large subunit of 16 s (LSU)] and two nuclear loci [sodium–potassium ATPase (NAK) and elongation factor 1-alpha (EF1)], for a total of 1453 base pairs. All 10 genera are embedded within an apparently monophyletic Ampithoidae (Amphitholina, Ampithoe, Biancolina, Cymadusa, Exampithoe, Paragrubia, Peramphithoe, Pleonexes, Plumithoe, Pseudoamphithoides and Sunamphitoe). Biancolina was previously placed within its own superfamily in another suborder. Within the family, single-locus trees were generally poor at resolving relationships among genera. Combined-locus trees were better at resolving deeper nodes, but complete resolution will require greater taxon sampling of ampithoids and closely related outgroup species, and more molecular characters. Despite these difficulties, our data generally support the monophyly of Ampithoidae, novel evolutionary relationships among genera, several currently accepted genera that will require revisions via alpha taxonomy and the presence of cryptic species.
", "title": "" }, { "docid": "261ef8b449727b615f8cd5bd458afa91", "text": "Luck (2009) argues that gamers face a dilemma when it comes to performing certain virtual acts. Most gamers regularly commit acts of virtual murder, and take these acts to be morally permissible. They are permissible because unlike real murder, no one is harmed in performing them; their only victims are computer-controlled characters, and such characters are not moral patients. What Luck points out is that this justification equally applies to virtual pedophilia, but gamers intuitively think that such acts are not morally permissible. The result is a dilemma: either gamers must reject the intuition that virtual pedophilic acts are impermissible and so accept partaking in such acts, or they must reject the intuition that virtual murder acts are permissible, and so abstain from many (if not most) extant games. While the prevailing solution to this dilemma has been to try and find a morally relevant feature to distinguish the two cases, I argue that a different route should be pursued. It is neither the case that all acts of virtual murder are morally permissible, nor are all acts of virtual pedophilia impermissible. Our intuitions falter and produce this dilemma because they are not sensitive to the different contexts in which games present virtual acts.", "title": "" }, { "docid": "23df6d913ffcdeda3de8b37977866bb7", "text": "This paper examined the impact of customer relationship management (CRM) elements on customer satisfaction and loyalty. CRM is one of the critical strategies that can be employed by organizations to improve competitive advantage. Four critical CRM elements are measured in this study are behavior of the employees, quality of customer services, relationship development and interaction management. The study was performed at a departmental store in Tehran, Iran. The study employed quantitative approach and base on 300 respondents. Multiple regression analysis is used to examine the relationship of the variables. The finding shows that behavior of the employees is significantly relate and contribute to customer satisfaction and loyalty.", "title": "" }, { "docid": "b311ce7a34d3bdb21678ed765bcd0f0b", "text": "This paper focuses on the micro-blogging service Twitter, looking at source credibility for information shared in relation to the Fukushima Daiichi nuclear power plant disaster in Japan. We look at the sources, credibility, and between-language differences in information shared in the month following the disaster. Messages were categorized by user, location, language, type, and credibility of information source. Tweets with reference to third-party information made up the bulk of messages sent, and it was also found that a majority of those sources were highly credible, including established institutions, traditional media outlets, and highly credible individuals. In general, profile anonymity proved to be correlated with a higher propensity to share information from low credibility sources. 
However, Japanese-language tweeters, while more likely to have anonymous profiles, referenced low credibility sources less often than non-Japanese tweeters, suggesting proximity to the disaster mediating the degree of credibility of shared content.", "title": "" }, { "docid": "96f42b3a653964cffa15d9b3bebf0086", "text": "The brain processes information through many layers of neurons. This deep architecture is representationally powerful [1,2,3,4], but it complicates learning by making it hard to identify the responsible neurons when a mistake is made [1,5]. In machine learning, the backpropagation algorithm [1] assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error signals by matrices consisting of all the synaptic weights on the neuron’s axon and farther downstream. This operation requires a precisely choreographed transport of synaptic weight information, which is thought to be impossible in the brain [1,6,7,8,9,10,11,12,13,14]. Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by random synaptic weights. We show that a network can learn to extract useful information from signals sent through these random feedback connections. In essence, the network learns to learn. We demonstrate that this new mechanism performs as quickly and accurately as backpropagation on a variety of problems and describe the principles which underlie its function. Our demonstration provides a plausible basis for how a neuron can be adapted using error signals generated at distal locations in the brain, and thus dispels long-held assumptions about the algorithmic constraints on learning in neural circuits. (arXiv:1411.0247v1 [q-bio.NC], 2 Nov 2014) Networks in the brain compute via many layers of interconnected neurons [15,16]. To work properly neurons must adjust their synapses so that the network’s outputs are appropriate for its tasks. A longstanding mystery is how upstream synapses (e.g. the synapse between α and β in Fig. 1a) are adjusted on the basis of downstream errors (e.g. e in Fig. 1a). In artificial intelligence this problem is solved by an algorithm called backpropagation of error [1]. Backprop works well in real-world applications [17,18,19], and networks trained with it can account for cell response properties in some areas of cortex [20,21]. But it is biologically implausible because it requires that neurons send each other precise information about large numbers of synaptic weights — i.e. it needs weight transport [1,6,7,8,12,14,22] (Fig. 1a, b). Specifically, backprop multiplies error signals e by the matrix W^T, the transpose of the forward synaptic connections, W (Fig. 1b). This implies that feedback is computed using knowledge of all the synaptic weights W in the forward path. For this reason, current theories of biological learning have turned to simpler schemes such as reinforcement learning [23], and “shallow” mechanisms which use errors to adjust only the final layer of a network [4,11]. But reinforcement learning, which delivers the same reward signal to each neuron, is slow and scales poorly with network size [5,13,24]. And shallow mechanisms waste the representational power of deep networks [3,4,25]. Here we describe a new deep-learning algorithm that is as fast and accurate as backprop, but much simpler, avoiding all transport of synaptic weight information. This makes it a mechanism the brain could easily exploit. 
It is based on three insights: (i) The feedback weights need not be exactly W^T. In fact, any matrix B will suffice, so long as on average,", "title": "" }, { "docid": "3417c9c1de65c18fe82dc8cdf46335a4", "text": "Romualdo Pastor-Satorras, Claudio Castellano, Piet Van Mieghem, and Alessandro Vespignani. Departament de Física i Enginyeria Nuclear, Universitat Politècnica de Catalunya, Campus Nord B4, 08034 Barcelona, Spain; Istituto dei Sistemi Complessi (ISC-CNR), via dei Taurini 19, I-00185 Roma, Italy; Dipartimento di Fisica, “Sapienza” Università di Roma, P.le A. Moro 2, I-00185 Roma, Italy; Delft University of Technology, Delft, The Netherlands; Laboratory for the Modeling of Biological and Socio-technical Systems, Northeastern University, Boston MA 02115 USA; Institute for Scientific Interchange Foundation, Turin 10133, Italy", "title": "" }, { "docid": "d685e84f8ddc55f2391a9feffc88889f", "text": "Little is known about how Agile developers and UX designers integrate their work on a day-to-day basis. While accounts in the literature attempt to integrate Agile development and UX design by combining their processes and tools, the contradicting claims found in the accounts complicate extracting advice from such accounts. This paper reports on three ethnographically-informed field studies of the day-to-day practice of developers and designers in organisational settings. Our results show that integration is achieved in practice through (1) mutual awareness, (2) expectations about acceptable behaviour, (3) negotiating progress and (4) engaging with each other. Successful integration relies on practices that support and maintain these four aspects in the day-to-day work of developers and designers.", "title": "" }, { "docid": "07e93064b1971a32b5c85b251f207348", "text": "With the growing demand on automotive electronics for the advanced driver assistance systems and autonomous driving, the functional safety becomes one of the most important issues in the hardware development. Thus, the safety standard for automotive E/E system, ISO-26262, becomes state-of-the-art guideline to ensure that the required safety level can be achieved. In this study, we base on ISO-26262 to develop a FMEDA-based fault injection and data analysis framework. The main contribution of this study is to effectively reduce the effort for generating FMEDA report which is used to evaluate hardware's safety level based on ISO-26262 standard.", "title": "" }, { "docid": "72a86b52797d61bf631d75cd7109e9d9", "text": "We introduce Olympus, a freely available framework for research in conversational interfaces. Olympus’ open, transparent, flexible, modular and scalable nature facilitates the development of large-scale, real-world systems, and enables research leading to technological and scientific advances in conversational spoken language interfaces. In this paper, we describe the overall architecture, several systems spanning different domains, and a number of current research efforts supported by Olympus.", "title": "" }, { "docid": "e56af4a3a8fbef80493d77b441ee1970", "text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. 
As design examples, wideband quasi-Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.", "title": "" }, { "docid": "5e360af9f3fa234afe9d2f71d04cc64c", "text": "Personality is an important psychological construct accounting for individual differences in people. To reliably, validly, and efficiently recognize an individual’s personality is a worthwhile goal; however, the traditional ways of personality assessment through self-report inventories or interviews conducted by psychologists are costly and less practical in social media domains, since they need the subjects to take active actions to cooperate. This paper proposes a method of big five personality recognition (PR) from microblog in Chinese language environments with a new machine learning paradigm named label distribution learning (LDL), which has never been previously reported to be used in PR. One hundred and thirteen features are extracted from 994 active Sina Weibo users’ profiles and micro-blogs. Eight LDL algorithms and nine non-trivial conventional machine learning algorithms are adopted to train the big five personality traits prediction models. Experimental results show that two of the proposed LDL approaches outperform the others in predictive ability, and the most predictive one also achieves relatively higher running efficiency among all the algorithms.", "title": "" } ]
scidocsrr
d10b8314ba96815d7e9476c0e9a938ae
Energy Efficient Mobile Cloud Computing Powered by Wireless Energy Transfer
[ { "docid": "10187e22397b1c30b497943764d32c34", "text": "Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.", "title": "" }, { "docid": "0cbd3587fe466a13847e94e29bb11524", "text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?", "title": "" } ]
[ { "docid": "597e00855111c6ccb891c96e28f23585", "text": "Global food demand is increasing rapidly, as are the environmental impacts of agricultural expansion. Here, we project global demand for crop production in 2050 and evaluate the environmental impacts of alternative ways that this demand might be met. We find that per capita demand for crops, when measured as caloric or protein content of all crops combined, has been a similarly increasing function of per capita real income since 1960. This relationship forecasts a 100-110% increase in global crop demand from 2005 to 2050. Quantitative assessments show that the environmental impacts of meeting this demand depend on how global agriculture expands. If current trends of greater agricultural intensification in richer nations and greater land clearing (extensification) in poorer nations were to continue, ~1 billion ha of land would be cleared globally by 2050, with CO(2)-C equivalent greenhouse gas emissions reaching ~3 Gt y(-1) and N use ~250 Mt y(-1) by then. In contrast, if 2050 crop demand was met by moderate intensification focused on existing croplands of underyielding nations, adaptation and transfer of high-yielding technologies to these croplands, and global technological improvements, our analyses forecast land clearing of only ~0.2 billion ha, greenhouse gas emissions of ~1 Gt y(-1), and global N use of ~225 Mt y(-1). Efficient management practices could substantially lower nitrogen use. Attainment of high yields on existing croplands of underyielding nations is of great importance if global crop demand is to be met with minimal environmental impacts.", "title": "" }, { "docid": "fdfea6d3a5160c591863351395929a99", "text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.", "title": "" }, { "docid": "cf5128cb4259ea87027ddd00189dc931", "text": "This paper interrogates the currently pervasive discourse of the ‘net generation’ finding the concept of the ‘digital native’ especially problematic, both empirically and conceptually. 
We draw on a research project of South African higher education students’ access to and use of Information and Communication Technologies (ICTs) to show that age is not a determining factor in students’ digital lives; rather, their familiarity and experience using ICTs is more relevant. We also demonstrate that the notion of a generation of ‘digital natives’ is inaccurate: those with such attributes are effectively a digital elite. Instead of a new net generation growing up to replace an older analogue generation, there is a deepening digital divide in South Africa characterized not by age but by access and opportunity; indeed, digital apartheid is alive and well. We suggest that the possibility for digital democracy does exist in the form of a mobile society which is not age specific, and which is ubiquitous. Finally, we propose redefining the concepts ‘digital’, ‘net’, ‘native’, and ‘generation’ in favour of reclaiming the term ‘digitizen’.", "title": "" }, { "docid": "8e1b6eb4a939c493eff27cf78bab8d47", "text": "Among the various natural calamities, flood is considered one of the most catastrophic natural hazards, which has a significant impact on the socio-economic lifeline of a country. The Assessment of flood risks facilitates taking appropriate measures to reduce the consequences of flooding. The flood risk assessment requires Big data which are coming from different sources, such as sensors, social media, and organizations. However, these data sources contain various types of uncertainties because of the presence of incomplete and inaccurate information. This paper presents a Belief rule-based expert system (BRBES) which is developed in Big data platform to assess flood risk in real time. The system processes extremely large dataset by integrating BRBES with Apache Spark while a web-based interface has developed allowing the visualization of flood risk in real time. Since the integrated BRBES employs knowledge driven learning mechanism, it has been compared with other data-driven learning mechanisms to determine the reliability in assessing flood risk. The integrated BRBES produces reliable results in comparison to other data-driven approaches. Data for the expert system has been collected by considering different case study areas of Bangladesh to validate the system.", "title": "" }, { "docid": "0a9f37b5a22d4c13cedcff69fc2caf7b", "text": "The Íslendinga sögur – or Sagas of Icelanders – constitute a collection of medieval literature set in Iceland around the late 9th to early 11th centuries, the so-called Saga Age. They purport to describe events during the period around the settlement of Iceland and the generations immediately following and constitute an important element of world literature thanks to their unique narrative style. Although their historicity is a matter of scholarly debate, the narratives contain interwoven and overlapping plots involving thousands of characters and interactions between them. Here we perform a network analysis of the Íslendinga sögur in an attempt to gather quantitative information on interrelationships between characters and to compare saga society to other social networks.", "title": "" }, { "docid": "1c8cd8953ed2c6dc5c95975a0581237a", "text": "We present a point tracking system powered by two deep convolutional neural networks. The first network, MagicPoint, operates on single images and extracts salient 2D points. The extracted points are “SLAM-ready” because they are by design isolated and well-distributed throughout the image. 
We compare this network against classical point detectors and discover a significant performance gap in the presence of image noise. As transformation estimation is more simple when the detected points are geometrically stable, we designed a second network, MagicWarp, which operates on pairs of point images (outputs of MagicPoint), and estimates the homography that relates the inputs. This transformation engine differs from traditional approaches because it does not use local point descriptors, only point locations. Both networks are trained with simple synthetic data, alleviating the requirement of expensive external camera ground truthing and advanced graphics rendering pipelines. The system is fast and lean, easily running 30+ FPS on a single CPU.", "title": "" }, { "docid": "f46c9848064716097c289ecb08052cad", "text": "This paper compares the performance of Black-Scholes with an artificial neural network (ANN) in pricing European style call options on the FTSE 100 index. It is the first extensive study of the performance of ANNs in pricing UK options, and the first to allow for dividends in the closed-form model. For out-of-the-money options, the ANN is clearly superior to Black-Scholes. For in-the-money options, if the sample space is restricted by excluding deep in-the-money and long maturity options (3.4% of total volume), the performance of the ANN is comparable with that of Black-Scholes. The superiority of the ANN is a surprising result, given that European style equity options are the home ground of Black-Scholes, and suggests that ANNs may have an important role to play in pricing other options for which there is either no closed-form model, or the closed-form model is less successful than Black-Scholes for equity options.", "title": "" }, { "docid": "5e94e30719ac09e86aaa50d9ab4ad57b", "text": "Blogs, regularly updated online journals, allow people to quickly and easily create and share online content. Most bloggers write about their everyday lives and generally have a small audience of regular readers. Readers interact with bloggers by contributing comments in response to specific blog posts. Moreover, readers of blogs are often bloggers themselves and acknowledge their favorite blogs by adding them to their blogrolls or linking to them in their posts. This paper presents a study of bloggers’ online and real life relationships in three blog communities: Kuwait Blogs, Dallas/Fort Worth Blogs, and United Arab Emirates Blogs. Through a comparative analysis of the social network structures created by blogrolls and blog comments, we find different characteristics for different kinds of links. Our online survey of the three communities reveals that few of the blogging interactions reflect close offline relationships, and moreover that many online relationships were formed through blogging.", "title": "" }, { "docid": "6e60d6b878c35051ab939a03bdd09574", "text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy in implementation. 
Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.", "title": "" }, { "docid": "a80e3d5ee1d158295378671fcc3ea4fb", "text": "We review the task of Sentence Pair Scoring, popular in the literature in various forms — viewed as Answer Sentence Selection, Semantic Text Scoring, Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a component of Memory Networks. We argue that all such tasks are similar from the model perspective and propose new baselines by comparing the performance of common IR metrics and popular convolutional, recurrent and attention-based neural models across many Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating randomized models, propose a statistically grounded methodology, and attempt to improve comparisons by releasing new datasets that are much harder than some of the currently used well explored benchmarks. We introduce a unified open source software framework with easily pluggable models and tasks, which enables us to experiment with multi-task reusability of trained sentence models.", "title": "" }, { "docid": "bfe45d100d0df1b5dad7c63a7a070359", "text": "AIM\nTo present the regenerative endodontic treatment procedure of a perforated internal root resorption case and its clinical and radiographic findings after 2 years.\n\n\nSUMMARY\nA 14-year-old female patient was referred complaining of moderate pain associated with her maxillary left lateral incisor. After radiographic examination, a perforated internal resorption lesion in the middle third of tooth 22 was detected. Under local anaesthesia and rubber dam isolation, an access cavity was prepared and the root canal was shaped using K-files under copious irrigation with 1% NaOCl, 17% EDTA and distilled water. At the end of the first and second appointments, calcium hydroxide (CH) paste was placed in the root canal using a lentulo. After 3 months, the CH paste was removed using 1% NaOCl and 17% EDTA solutions and bleeding in the root canal was achieved by placing a size 20 K-file into the periapical tissues. Mineral trioxide aggregate was then placed over the blood clot. The access cavity was restored using glass-ionomer cement and resin composite. After 2 years, the tooth was asymptomatic and radiographic examination revealed hard tissue formation in the perforated resorption area and remodelling of the root surface.\n\n\nKEY LEARNING POINTS\nRegenerative endodontic treatment procedures are an alternative approach to treat perforated internal root resorption lesions. Calcium hydroxide was effective as an intracanal medicament in regenerative endodontic treatment procedures.", "title": "" }, { "docid": "88a21d973ec80ee676695c95f6b20545", "text": "Three-dimensional models provide a volumetric representation of space which is important for a variety of robotic applications including flying robots and robots that are equipped with manipulators. In this paper, we present an open-source framework to generate volumetric 3D environment models. Our mapping approach is based on octrees and uses probabilistic occupancy estimation. It explicitly represents not only occupied space, but also free and unknown areas. Furthermore, we propose an octree map compression method that keeps the 3D models compact. 
Our framework is available as an open-source C++ library and has already been successfully applied in several robotics projects. We present a series of experimental results carried out with real robots and on publicly available real-world datasets. The results demonstrate that our approach is able to update the representation efficiently and models the data consistently while keeping the memory requirement at a minimum.", "title": "" }, { "docid": "d558f980b85bf970a7b57c00df361591", "text": "URL shortener services today have come to play an important role in our social media landscape. They direct user attention and disseminate information in online social media such as Twitter or Facebook. Shortener services typically provide short URLs in exchange for long URLs. These short URLs can then be shared and diffused by users via online social media, e-mail or other forms of electronic communication. When another user clicks on the shortened URL, she will be redirected to the underlying long URL. Shortened URLs can serve many legitimate purposes, such as click tracking, but can also serve illicit behavior such as fraud, deceit and spam. Although usage of URL shortener services today is ubiquitous, our research community knows little about how exactly these services are used and what purposes they serve. In this paper, we study usage logs of a URL shortener service that has been operated by our group for more than a year. We expose the extent of spamming taking place in our logs, and provide first insights into the planetary-scale of this problem. Our results are relevant for researchers and engineers interested in understanding the emerging phenomenon and dangers of spamming via URL shortener services.", "title": "" }, { "docid": "2e3f05ee44b276b51c1b449e4a62af94", "text": "We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using two- instead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.", "title": "" }, { "docid": "85bc241c03d417099aa155766e6a1421", "text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. 
We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.", "title": "" }, { "docid": "f31669e97fc655e74e8bb8324031060b", "text": "Being an emerging paradigm for display advertising, Real-Time Bidding (RTB) drives the focus of the bidding strategy from context to users’ interest by computing a bid for each impression in real time. The data mining work and particularly the bidding strategy development becomes crucial in this performance-driven business. However, researchers in computational advertising area have been suffering from lack of publicly available benchmark datasets, which are essential to compare different algorithms and systems. Fortunately, a leading Chinese advertising technology company iPinYou decided to release the dataset used in its global RTB algorithm competition in 2013. The dataset includes logs of ad auctions, bids, impressions, clicks, and final conversions. These logs reflect the market environment as well as form a complete path of users’ responses from advertisers’ perspective. This dataset directly supports the experiments of some important research problems such as bid optimisation and CTR estimation. To the best of our knowledge, this is the first publicly available dataset on RTB display advertising. Thus, they are valuable for reproducible research and understanding the whole RTB ecosystem. In this paper, we first provide the detailed statistical analysis of this dataset. Then we introduce the research problem of bid optimisation in RTB and the simple yet comprehensive evaluation protocol. Besides, a series of benchmark experiments are also conducted, including both click-through rate (CTR) estimation and bid optimisation.", "title": "" }, { "docid": "e50842fc8438af7fe6ce4b6d9a5439a7", "text": "OBJECTIVE\nTimely recognition and optimal management of atherogenic dyslipidemia (AD) and residual vascular risk (RVR) in family medicine.\n\n\nBACKGROUND\nThe global increase of the incidence of obesity is accompanied by an increase in the incidence of many metabolic and lipoprotein disorders, in particular AD, as a typical feature of obesity, metabolic syndrome, insulin resistance and diabetes type 2. AD is an important factor in cardio metabolic risk, and is characterized by a lipoprotein profile with low levels of high-density lipoprotein (HDL), high levels of triglycerides (TG) and high levels of low-density lipoprotein (LDL) cholesterol. Standard cardiometabolic risk assessment using the Framingham risk score and standard treatment with statins is usually sufficient, but not always that effective, because it does not reduce RVR that is attributed to elevated TG and reduced HDL cholesterol. RVR is subject to reduction through lifestyle changes or by pharmacological interventions. In some studies it was concluded that dietary interventions should aim to reduce the intake of calories, simple carbohydrates and saturated fats, with the goal of reaching cardiometabolic suitability, rather than weight reduction. Other studies have found that the reduction of carbohydrates in the diet or weight loss can alleviate AD changes, while changes in intake of total or saturated fat had no significant influence. 
In our presented case, a lifestyle change was advised as a suitable diet with reduced intake of carbohydrates and a moderate physical activity of walking for at least 180 minutes per week, with an recommendation for daily intake of calories alignment with the total daily (24-hour) energy expenditure (24-EE), depending on the degree of physical activity, type of food and the current health condition. Such lifestyle changes together with combined medical therapy with Statins, Fibrates and Omega-3 fatty acids, resulted in significant improvement in atherogenic lipid parameters.\n\n\nCONCLUSION\nUnsuitable atherogenic nutrition and insufficient physical activity are the new risk factors characteristic for AD. Nutritional interventions such as diet with reduced intake of carbohydrates and calories, moderate physical activity, combined with pharmacotherapy can improve atherogenic dyslipidemic profile and lead to loss of weight. Although one gram of fat release twice more kilo calories compared to carbohydrates, carbohydrates seems to have a greater atherogenic potential, which should be explored in future.", "title": "" }, { "docid": "20fa99f56e249d4326a7d840c5cbd9b7", "text": "Single image super-resolution (SR) is an ill-posed problem, which tries to recover a high-resolution image from its low-resolution observation. To regularize the solution of the problem, previous methods have focused on designing good priors for natural images, such as sparse representation, or directly learning the priors from a large data set with models, such as deep neural networks. In this paper, we argue that domain expertise from the conventional sparse coding model can be combined with the key ingredients of deep learning to achieve further improved results. We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data. The network has a cascaded structure, which boosts the SR performance for both fixed and incremental scaling factors. The proposed training and testing schemes can be extended for robust handling of images with additional degradation, such as noise and blurring. A subjective assessment is conducted and analyzed in order to thoroughly evaluate various SR techniques. Our proposed model is tested on a wide range of images, and it significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.", "title": "" }, { "docid": "59370193760b0bebaf530ce669e4ef80", "text": "AlGaN/GaN HEMT using field plate and recessed gate for X-band application was developed on SiC substrate. Internal matching circuits were designed to achieve high gain at 8 GHz for the developed device with single chip and four chips combining, respectively. The internally matched 5.52 mm single chip AlGaN/GaN HEMT exhibited 36.5 W CW output power with a power added efficiency (PAE) of 40.1% and power density of 6.6 W/mm at 35 V drain bias voltage (Vds). The device with four chips combining demonstrated a CW over 100 W across the band of 7.7-8.2 GHz, and an maximum CW output power of 119.1 W with PAE of 38.2% at Vds =31.5 V. This is the highest output power for AlGaN/GaN HEMT operated at X-band to the best of our knowledge.", "title": "" }, { "docid": "356c29a56a781074462a107a849c3412", "text": "One of the long-standing challenges in Artificial Intelligence for goal-directed behavior is to build a single agent which can solve multiple tasks. 
Recent progress in multi-task learning for goal-directed sequential tasks has been in the form of distillation based learning wherein a student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks. While such approaches offer a promising solution to the multitask learning problem, they require supervision from large task-specific (expert) networks which require extensive training. We propose a simple yet efficient multi-task learning framework which solves multiple goal-directed tasks in an online or active learning setup without the need for expert supervision.", "title": "" } ]
scidocsrr
aea48d17b29d7ab2d782d1f532d4eb32
Solving Single-digit Sudoku Subproblems
[ { "docid": "87a6fd003dd6e23f27e791c9de8b8ba6", "text": "The well-known travelling salesman problem is the following: \" A salesman is required ~,o visit once and only once each of n different cities starting from a base city, and returning to this city. What path minimizes the to ta l distance travelled by the salesman?\" The problem has been treated by a number of different people using a var ie ty of techniques; el. Dantzig, Fulkerson, Johnson [1], where a combination of ingemtity and linear programming is used, and Miller, Tucker and Zemlin [2], whose experiments using an all-integer program of Gomory did not produce results i~ cases with ten cities although some success was achieved in eases of simply four cities. The purpose of this note is to show tha t this problem can easily be formulated in dynamic programming terms [3], and resolved computationally for up to 17 cities. For larger numbers, the method presented below, combined with various simple manipulations, may be used to obtain quick approximate solutions. Results of this nature were independently obtained by M. Held and R. M. Karp, who are in the process of publishing some extensions and computat ional results.", "title": "" }, { "docid": "38506c89b32c7c82d45040fd99c36986", "text": "We provide a simple linear time transformation from a direct ed or undirected graph with labeled edges to an unlabeled digraph, such that paths in the input graph in which no two consecutive edges have the same label correspond to paths in the transformed graph and vice v ersa. Using this transformation, we provide efficient algorithms for finding paths and cycles with no two consecuti ve equal labels. We also consider related problems where the paths and cycles are required to be simple; we find ef ficient algorithms for the undirected case of these problems but show the directed case to be NP-complete. We app ly our path and cycle finding algorithms in a program for generating and solving Sudoku puzzles, and show experimentally that they lead to effective puzzlesolving rules that may also be of interest to human Sudoku puz zle solvers.", "title": "" } ]
[ { "docid": "40bd8351735f780ba104fa63383002fe", "text": "M a y / J u n e 2 0 0 0 I E E E S O F T W A R E 37 between the user-requirements specification and the software-requirements specification, mandating complete documentation of each according to various rules. Other cases emphasize this distinction less. For instance, some groups at Microsoft argue that the difficulty of keeping a technical specification consistent with the program is more trouble than the benefit merits.2 We can find a wide range of views in industry literature and from the many organizations that write software. Is it possible to clarify these various artifacts and study their properties, given the wide variations in the use of terms and the many different kinds of software being written? Our aim is to provide a framework for talking about key artifacts, their attributes, and relationships at a general level, but precisely enough that we can rigorously analyze substantive properties.", "title": "" }, { "docid": "60f31d60213abe65faec3eb69edb1eea", "text": "In this paper, a novel multi-layer four-way out-of-phase power divider based on substrate integrated waveguide (SIW) is proposed. The four-way power division is realized by 3-D mode coupling; vertical partitioning of a SIW followed by lateral coupling to two half-mode SIW. The measurement results show the excellent insertion loss (S<inf>21</inf>, S<inf>31</inf>, S<inf>41</inf>, S<inf>51</inf>: −7.0 ± 0.5 dB) and input return loss (S<inf>11</inf>: −10 dB) in X-band (7.63 GHz ∼ 11.12 GHz). We expect that the proposed power divider play an important role for the integration of compact multi-way SIW circuits.", "title": "" }, { "docid": "7881f99465004a45f3089b0ec23925e0", "text": "In recent decades, extensive studies from diverse disciplines have focused on children's developmental awareness of different gender roles and the relationships between genders. Among these studies, researchers agree that children's picture books have an increasingly significant place in children's development because these books are a widely available cultural resource, offering young children a multitude of opportunities to gain information, become familiar with the printed pictures, be entertained, and experience perspectives other than their own. In such books, males are habitually described as active and domineering, while females rarely reveal their identities and very frequently are represented as meek and mild. This valuable venue for children's gender development thus unfortunately reflects engrained societal attitudes and biases in the available choices and expectations assigned to different genders. This discriminatory portrayal in many children's picture books also runs the risk of leading children toward a misrepresented and misguided realization of their true potential in their expanding world.", "title": "" }, { "docid": "333bffc73983bc159248420d76afc7e6", "text": "In this paper we study approximate landmark-based methods for point-to-point distance estimation in very large networks. These methods involve selecting a subset of nodes as landmarks and computing offline the distances from each node in the graph to those landmarks. At runtime, when the distance between a pair of nodes is needed, it can be estimated quickly by combining the precomputed distances. We prove that selecting the optimal set of landmarks is an NP-hard problem, and thus heuristic solutions need to be employed. 
We therefore explore theoretical insights to devise a variety of simple methods that scale well in very large networks. The efficiency of the suggested techniques is tested experimentally using five real-world graphs having millions of edges. While theoretical bounds support the claim that random landmarks work well in practice, our extensive experimentation shows that smart landmark selection can yield dramatically more accurate results: for a given target accuracy, our methods require as much as 250 times less space than selecting landmarks at random. In addition, we demonstrate that at a very small accuracy loss our techniques are several orders of magnitude faster than the state-of-the-art exact methods. Finally, we study an application of our methods to the task of social search in large graphs.", "title": "" }, { "docid": "8b0a90d4f31caffb997aced79c59e50c", "text": "Visual SLAM systems aim to estimate the motion of a moving camera together with the geometric structure and appearance of the world being observed. To the extent that this is possible using only an image stream, the core problem that must be solved by any practical visual SLAM system is that of obtaining correspondence throughout the images captured. Modern visual SLAM pipelines commonly obtain correspondence by using sparse feature matching techniques and construct maps using a composition of point, line or other simple geometric primitives. The resulting sparse feature map representations provide sparsely furnished, incomplete reconstructions of the observed scene. Related techniques from multiple view stereo (MVS) achieve high quality dense reconstruction by obtaining dense correspondences over calibrated image sequences. Despite the usefulness of the resulting dense models, these techniques have been of limited use in visual SLAM systems. The computational complexity of estimating dense surface geometry has been a practical barrier to its use in real-time SLAM. Furthermore, MVS algorithms have typically required a fixed length, calibrated image sequence to be available throughout the optimisation — a condition fundamentally at odds with the online nature of SLAM. With the availability of massively-parallel commodity computing hardware, we demonstrate new algorithms that achieve high quality incremental dense reconstruction within online visual SLAM. The result is a live dense reconstruction (LDR) of scenes that makes possible numerous applications that can utilise online surface modelling, for instance: planning robot interactions with unknown objects, augmented reality with characters that interact with the scene, or providing enhanced data for object recognition. The core of this thesis goes beyond LDR to demonstrate fully dense visual SLAM. We replace the sparse feature map representation with an incrementally updated, non-parametric, dense surface model. By enabling real-time dense depth map estimation through novel short baseline MVS, we can continuously update the scene model and further leverage its predictive capabilities to achieve robust camera pose estimation with direct whole image alignment. We demonstrate the capabilities of dense visual SLAM using a single moving passive camera, and also when real-time surface measurements are provided by a commodity depth camera. The results demonstrate state-of-the-art, pick-up-and-play 3D reconstruction and camera tracking systems useful in many real world scenarios. 
Acknowledgements There are key individuals who have provided me with all the support and tools that a student who sets out on an adventure could want. Here, I wish to acknowledge those friends and colleagues, that by providing technical advice or much needed fortitude, helped bring this work to life. Prof. Andrew Davison’s robot vision lab provides a unique research experience amongst computer vision labs in the world. First and foremost, I thank my supervisor Andy for giving me the chance to be part of that experience. His brilliant guidance and support of my growth as a researcher are well matched by his enthusiasm for my work. This is made most clear by his fostering the joy of giving live demonstrations of work in progress. His complete faith in my ability drove me on and gave me license to develop new ideas and build bridges to research areas that we knew little about. Under his guidance I’ve been given every possible opportunity to develop my research interests, and this thesis would not be possible without him. My appreciation for Prof. Murray Shanahan’s insights and spirit began with our first conversation. Like ripples from a stone cast into a pond, the presence of his ideas and depth of knowledge instantly propagated through my mind. His enthusiasm and capacity to discuss any topic, old or new to him, and his ability to bring ideas together across the worlds of science and philosophy, showed me an openness to thought that I continue to try to emulate. I am grateful to Murray for securing a generous scholarship for me in the Department of Computing and for providing a home away from home in his cognitive robotics lab. I am indebted to Prof. Owen Holland who introduced me to the world of research at the University of Essex. Owen showed me a first glimpse of the breadth of ideas in robotics, AI, cognition and beyond. I thank Owen for introducing me to the idea of continuing in academia for a doctoral degree and for introducing me to Murray. I have learned much with many friends and colleagues at Imperial College, but there are three who have been instrumental. I thank Steven Lovegrove, Ankur Handa and Renato Salas-Moreno who travelled with me on countless trips into the unknown, sometimes to chase a small concept but more often than not in pursuit of the bigger picture we all wanted to see. They indulged me with months of exploration, collaboration and fun, leading to us understand ideas and techniques that were once out of reach. Together, we were able to learn much more. Thank you Hauke Strasdatt, Luis Pizarro, Jan Jachnick, Andreas Fidjeland and members of the robot vision and cognitive robotics labs for brilliant discussions and for sharing the", "title": "" }, { "docid": "5868ec5c17bf7349166ccd0600cc6b07", "text": "Secure devices are often subject to attacks and behavioural analysis in order to inject faults on them and/or extract otherwise secret information. Glitch attacks, sudden changes on the power supply rails, are a common technique used to inject faults on electronic devices. Detectors are designed to catch these attacks. As the detectors become more efficient, new glitches that are harder to detect arise. Common glitch detection approaches, such as directly monitoring the power rails, can potentially find it hard to detect fast glitches, as these become harder to differentiate from noise. 
This paper proposes a design which, instead of monitoring the power rails, monitors the effect of a glitch on a sensitive circuit, hence reducing the risk of detecting noise as glitches.", "title": "" }, { "docid": "0d38949c93a7b86a0245a7e5bfe89114", "text": "Software Defined Radio (SDR) is a flexible architecture which can be configured to adapt various wireless standards, waveforms, frequency bands, bandwidths, and modes of operations. This paper presents a detailed survey of the existing hardware and software platform for SDRs. It also presents prototype system for designing and testing of software defined radios in MATLAB/Simulink and briefly discusses the salient functions of the prototype system for Cognitive Radio (CR). A prototype system for wireless personal area network is built and interfaced with a Universal Software Radio Peripheral-2 (USRP2) main-board and RFX2400 daughter board from Ettus Research LLC. The philosophy behind the prototype is to do all waveform-specific processing such as channel coding, modulation, filtering etc. on a host (PC) and general purpose high-speed operations like digital up and down conversion, decimation and interpolation etc. inside FPGA on an USRP2. MATLAB has a rich family of toolboxes that allows building software-defined and cognitive radio to explore various spectrum sensing, prediction and management techniques.", "title": "" }, { "docid": "6fb50b6f34358cf3229bd7645bf42dcd", "text": "With the in-depth study of sentiment analysis research, finer-grained opinion mining, which aims to detect opinions on different review features as opposed to the whole review level, has been receiving more and more attention in the sentiment analysis research community recently. Most of existing approaches rely mainly on the template extraction to identify the explicit relatedness between product feature and opinion terms, which is insufficient to detect the implicit review features and mine the hidden sentiment association in reviews, which satisfies (1) the review features are not appear explicit in the review sentences; (2) it can be deduced by the opinion words in its context. From an information theoretic point of view, this paper proposed an iterative reinforcement framework based on the improved information bottleneck algorithm to address such problem. More specifically, the approach clusters product features and opinion words simultaneously and iteratively by fusing both their semantic information and co-occurrence information. The experimental results demonstrate that our approach outperforms the template extraction based approaches.", "title": "" }, { "docid": "0c34e8355f1635b3679159abd0a82806", "text": "Bar charts are an effective way to convey numeric information, but today's algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. 
Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas.", "title": "" }, { "docid": "913ea886485fae9b567146532ca458ac", "text": "This article presents a new method to illustrate the feasibility of 3D topology creation. We base the 3D construction process on testing real cases of implementation of 3D parcels construction in a 3D cadastral system. With the utilization and development of dense urban space, true 3D geometric volume primitives are needed to represent 3D parcels with the adjacency and incidence relationship. We present an effective straightforward approach to identifying and constructing the valid volumetric cadastral object from the given faces, and build the topological relationships among 3D cadastral objects on-thefly, based on input consisting of loose boundary 3D faces made by surveyors. This is drastically different from most existing methods, which focus on the validation of single volumetric objects after the assumption of the object’s creation. Existing methods do not support the needed types of geometry/ topology (e.g. non 2-manifold, singularities) and how to create and maintain valid 3D parcels is still a challenge in practice. We will show that the method does not change the faces themselves and faces in a given input are independently specified. Various volumetric objects, including non-manifold 3D cadastral objects (legal spaces), can be constructed correctly by this method, as will be shown from the", "title": "" }, { "docid": "758bc9b5e633d59afb155650239591a9", "text": "A growing body of works address automated mining of biochemical knowledge from digital repositories of scientific literature, such as MEDLINE. Some of these works use abstracts as the unit of text from which to extract facts. Others use sentences for this purpose, while still others use phrases. Here we compare abstracts, sentences, and phrases in MEDLINE using the standard information retrieval performance measures of recall, precision, and effectiveness, for the task of mining interactions among biochemical terms based on term co-occurrence. Results show statistically significant differences that can impact the choice of text unit.", "title": "" }, { "docid": "19d29667e1632ff6f0a7446de22cdb84", "text": "Chronic kidney disease (CKD) is defined by persistent urine abnormalities, structural abnormalities or impaired excretory renal function suggestive of a loss of functional nephrons. The majority of patients with CKD are at risk of accelerated cardiovascular disease and death. For those who progress to end-stage renal disease, the limited accessibility to renal replacement therapy is a problem in many parts of the world. Risk factors for the development and progression of CKD include low nephron number at birth, nephron loss due to increasing age and acute or chronic kidney injuries caused by toxic exposures or diseases (for example, obesity and type 2 diabetes mellitus). The management of patients with CKD is focused on early detection or prevention, treatment of the underlying cause (if possible) to curb progression and attention to secondary processes that contribute to ongoing nephron loss. Blood pressure control, inhibition of the renin–angiotensin system and disease-specific interventions are the cornerstones of therapy. 
CKD complications such as anaemia, metabolic acidosis and secondary hyperparathyroidism affect cardiovascular health and quality of life, and require diagnosis and treatment.", "title": "" }, { "docid": "6724af38a637d61ccc2a4ad8119c6e1a", "text": "INTRODUCTION Pivotal to athletic performance is the ability to more maintain desired athletic performance levels during particularly critical periods of competition [1], such as during pressurised situations that typically evoke elevated levels of anxiety (e.g., penalty kicks) or when exposed to unexpected adversities (e.g., unfavourable umpire calls on crucial points) [2, 3]. These kinds of situations become markedly important when athletes, who are separated by marginal physical and technical differences, are engaged in closely contested matches, games, or races [4]. It is within these competitive conditions, in particular, that athletes’ responses define their degree of success (or lack thereof); responses that are largely dependent on athletes’ psychological attributes [5]. One of these attributes appears to be mental toughness (MT), which has often been classified as a critical success factor due to the role it plays in fostering adaptive responses to positively and negatively construed pressures, situations, and events [6 8]. However, as scholars have intensified", "title": "" }, { "docid": "6cf3f0b1cb7a687d0c04dc91c574cda8", "text": "In recent years, crowdsourcing has become essential in a wide range of Web applications. One of the biggest challenges of crowdsourcing is the quality of crowd answers as workers have wide-ranging levels of expertise and the worker community may contain faulty workers. Although various techniques for quality control have been proposed, a post-processing phase in which crowd answers are validated is still required. Validation is typically conducted by experts, whose availability is limited and who incur high costs. Therefore, we develop a probabilistic model that helps to identify the most beneficial validation questions in terms of both, improvement of result correctness and detection of faulty workers. Our approach allows us to guide the expert's work by collecting input on the most problematic cases, thereby achieving a set of high quality answers even if the expert does not validate the complete answer set. Our comprehensive evaluation using both real-world and synthetic datasets demonstrates that our techniques save up to 50% of expert efforts compared to baseline methods when striving for perfect result correctness. In absolute terms, for most cases, we achieve close to perfect correctness after expert input has been sought for only 20\\% of the questions.", "title": "" }, { "docid": "13ac4474f01136b2603f2b7ee9eedf19", "text": "Teamwork is best achieved when members of the team understand one another. Human-robot collaboration poses a particular challenge to this goal due to the differences between individual team members, both mentally/computationally and physically. One way in which this challenge can be addressed is by developing explicit models of human teammates. Here, we discuss, compare and contrast the many techniques available for modeling human cognition and behavior, and evaluate their benefits and drawbacks in the context of human-robot collaboration.", "title": "" }, { "docid": "164fca8833981d037f861aada01d5f7f", "text": "Kernel methods provide a principled way to perform non linear, nonparametric learning. 
They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that allows to efficiently process millions of points. FALKON is derived combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n √ n) time. An extensive experimental analysis on large scale datasets shows that, even with a single machine, FALKON outperforms previous state of the art solutions, which exploit parallel/distributed architectures.", "title": "" }, { "docid": "5b17c5637af104b1f20ff1ca9ce9c700", "text": "According to the traditional understanding of cerebrospinal fluid (CSF) physiology, the majority of CSF is produced by the choroid plexus, circulates through the ventricles, the cisterns, and the subarachnoid space to be absorbed into the blood by the arachnoid villi. This review surveys key developments leading to the traditional concept. Challenging this concept are novel insights utilizing molecular and cellular biology as well as neuroimaging, which indicate that CSF physiology may be much more complex than previously believed. The CSF circulation comprises not only a directed flow of CSF, but in addition a pulsatile to and fro movement throughout the entire brain with local fluid exchange between blood, interstitial fluid, and CSF. Astrocytes, aquaporins, and other membrane transporters are key elements in brain water and CSF homeostasis. A continuous bidirectional fluid exchange at the blood brain barrier produces flow rates, which exceed the choroidal CSF production rate by far. The CSF circulation around blood vessels penetrating from the subarachnoid space into the Virchow Robin spaces provides both a drainage pathway for the clearance of waste molecules from the brain and a site for the interaction of the systemic immune system with that of the brain. Important physiological functions, for example the regeneration of the brain during sleep, may depend on CSF circulation.", "title": "" }, { "docid": "7a8a98b91680cbc63594cd898c3052c8", "text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. 
Our evaluation shows that our method scales far better than the current state-of-practice approach that supports policy-based access control.", "title": "" }, { "docid": "a077507e8d2bde5bb326873413b5bd99", "text": "Encryption is widely used across the internet to secure communications and ensure that information cannot be intercepted and read by a third party. However, encryption also allows cybercriminals to hide their messages and carry out successful malware attacks while avoiding detection. Further aiding criminals is the fact that web browsers display a green lock symbol in the URL bar when a connection to a website is encrypted. This symbol gives a false sense of security to users, who are in turn more likely to fall victim to phishing attacks. The risk of encrypted traffic means that information security researchers must explore new techniques to detect, classify, and take countermeasures against malicious traffic. So far there exists no approach for TLS detection in the wild. In this paper, we propose a method for identifying malicious use of web certificates using deep neural networks. Our system uses the content of TLS certificates to successfully identify legitimate certificates as well as malicious patterns used by attackers. The results show that our system is capable of identifying malware certificates with an accuracy of 94.87% and phishing certificates with an accuracy of 88.64%.", "title": "" }, { "docid": "e7b2956529e0a0a29c9abaf8bb044a6c", "text": "Information extraction studies have been conducted to improve the efficiency and accuracy of information retrieval. We developed information extraction techniques to extract the name of the company, period of the document, currency, revenue, and number of employees from financial report documents automatically. Unlike other works, we applied a multi-strategy approach to developing the extraction techniques. We separated the information based on its characteristics before designing the extraction techniques, assuming that the differing characteristics of each type of information call for different extraction strategies. The first strategy constructs extraction techniques using a rule-based extraction method for information with good regularity in orthographic and layout features, such as the name of the company, period of the document, and currency. The second strategy applies a machine learning-based extraction method to information with rich contextual and list look-up features, such as revenue and number of employees. In the first strategy, rule patterns are defined by combining orthographic, layout, and limited contextual features; the defined rule patterns succeed in extracting the information, achieving precision, recall, and F1-measure above 0.98. In the second strategy, we treated extraction as a classification task: we built classification models using Naive Bayes and Support Vector Machines algorithms, extracted the most informative features to train the classification models, and used the best classification model for the extraction task. Contextual and list look-up features play an important role in improving extraction performance. The second strategy succeeds in extracting revenue and number-of-employees information, achieving precision, recall, and F1-measure above 0.93.", "title": "" } ]
scidocsrr
b458ce1c4b32894522418d88521b0413
Using Smartphones to Detect Car Accidents and Provide Situational Awareness to Emergency Responders
[ { "docid": "8718d91f37d12b8ff7658723a937ea84", "text": "We consider the problem of monitoring road and traffic conditions in a city. Prior work in this area has required the deployment of dedicated sensors on vehicles and/or on the roadside, or the tracking of mobile phones by service providers. Furthermore, prior work has largely focused on the developed world, with its relatively simple traffic flow patterns. In fact, traffic flow in cities of the developing regions, which comprise much of the world, tends to be much more complex owing to varied road conditions (e.g., potholed roads), chaotic traffic (e.g., a lot of braking and honking), and a heterogeneous mix of vehicles (2-wheelers, 3-wheelers, cars, buses, etc.).\n To monitor road and traffic conditions in such a setting, we present Nericell, a system that performs rich sensing by piggybacking on smartphones that users carry with them in normal course. In this paper, we focus specifically on the sensing component, which uses the accelerometer, microphone, GSM radio, and/or GPS sensors in these phones to detect potholes, bumps, braking, and honking. Nericell addresses several challenges including virtually reorienting the accelerometer on a phone that is at an arbitrary orientation, and performing honk detection and localization in an energy efficient manner. We also touch upon the idea of triggered sensing, where dissimilar sensors are used in tandem to conserve energy. We evaluate the effectiveness of the sensing functions in Nericell based on experiments conducted on the roads of Bangalore, with promising results.", "title": "" } ]
[ { "docid": "f850321173db137674eb74a0dd2afc30", "text": "The relational data model has been dominant and widely used since 1970. However, as the need to deal with big data grows, new data models, such as Hadoop and NoSQL, were developed to address the limitation of the traditional relational data model. As a result, determining which data model is suitable for applications has become a challenge. The purpose of this paper is to provide insight into choosing the suitable data model by conducting a benchmark using Yahoo! Cloud Serving Benchmark (YCSB) on three different database systems: (1) MySQL for relational data model, (2) MongoDB for NoSQL data model, and (3) HBase for Hadoop framework. The benchmark was conducted by running four different workloads. Each workload is executed using a different increasing operation and thread count, while observing how their change respectively affects throughput, latency, and runtime.", "title": "" }, { "docid": "497fdaf295df72238f9ec0cb879b6a48", "text": "A vehicle or fleet management system is implemented for tracking the movement of the vehicle at any time from any location. This proposed system helps in real time tracking of the vehicle using a smart phone application. This method is easy and efficient when compared to other implementations. In emerging technology of developing IOT (Internet of Things) the generic 8 bit/16 bit micro controllers are replaced by 32bit micro controllers in the embedded systems. This has many advantages like use of 32bit micro controller’s scalability, reusability and faster execution speed. Implementation of RTOS is very much necessary for having a real time system. RTOS features are application portability, reusability, more efficient use of system resources. The proposed system uses a 32bit ARM7 based microcontroller with an embedded Real Time Operating System (RTOS).The vehicle unit application is written on FreeRTOS. The peripheral drivers like UART, External interrupt are developed for RTOS aware environment. The vehicle unit consists of a GPS/GPRS module where the position of the vehicle is got from the Global Positioning System (GPS) and the General Packet Radio Service (GPRS) is used to update the timely information of the vehicle position. The vehicle unit updates the location to the Fleet management application on the web server. The vehicle management is a java based web application integrated with MySQL Database. The web application in the proposed system is based on OpenGTS open source vehicle tracking application. A GoTrack Android application is configured to work with web application. The smart phone application also provides a separate login for administrator to add, edit and remove the vehicles on the fleet management system. The users and administrators can continuously monitor the vehicle using a smart phone application.", "title": "" }, { "docid": "92684148cd7d2a6a21657918015343b0", "text": "Radiative wireless power transfer (WPT) is a promising technology to provide cost-effective and real-time power supplies to wireless devices. Although radiative WPT shares many similar characteristics with the extensively studied wireless information transfer or communication, they also differ significantly in terms of design objectives, transmitter/receiver architectures and hardware constraints, and so on. 
In this paper, we first give an overview on the various WPT technologies, the historical development of the radiative WPT technology and the main challenges in designing contemporary radiative WPT systems. Then, we focus on the state-of-the-art communication and signal processing techniques that can be applied to tackle these challenges. Topics discussed include energy harvester modeling, energy beamforming for WPT, channel acquisition, power region characterization in multi-user WPT, waveform design with linear and non-linear energy receiver model, safety and health issues of WPT, massive multiple-input multiple-output and millimeter wave enabled WPT, wireless charging control, and wireless power and communication systems co-design. We also point out directions that are promising for future research.", "title": "" }, { "docid": "3bb9fc6e09c9ce13252a04d6978d1bfc", "text": "Recently, sparse coding has been successfully applied in visual tracking. The goal of this paper is to review the state-of-the-art tracking methods based on sparse coding. We first analyze the benefits of using sparse coding in visual tracking and then categorize these methods into appearance modeling based on sparse coding (AMSC) and target searching based on sparse representation (TSSR) as well as their combination. For each categorization, we introduce the basic framework and subsequent improvements with emphasis on their advantages and disadvantages. Finally, we conduct extensive experiments to compare the representative methods on a total of 20 test sequences. The experimental results indicate that: (1) AMSC methods significantly outperform TSSR methods. (2) For AMSC methods, both discriminative dictionary and spatial order reserved pooling operators are important for achieving high tracking accuracy. (3) For TSSR methods, the widely used identity pixel basis will degrade the performance when the target or candidate images are not aligned well or severe occlusion occurs. (4) For TSSR methods, ℓ1 norm minimization is not necessary. In contrast, ℓ2 norm minimization can obtain comparable performance but with lower computational cost. The open questions and future research topics are also discussed.", "title": "" }, { "docid": "50268ed4eb8f14966d9d0ec32b01429f", "text": "Women's empowerment is an important goal in achieving sustainable development worldwide. Offering access to microfinance services to women is one way to increase women's empowerment. However, empirical evidence provides mixed results with respect to its effectiveness. We reviewed previous research on the impact of microfinance services on different aspects of women's empowerment. We propose a Three-Dimensional Model of Women's Empowerment to integrate previous findings and to gain a deeper understanding of women's empowerment in the field of microfinance services. This model proposes that women's empowerment can take place on three distinct dimensions: (1) the micro-level, referring to an individual's personal beliefs as well as actions, where personal empowerment can be observed, (2) the meso-level, referring to beliefs as well as actions in relation to relevant others, where relational empowerment can be observed, and (3) the macro-level, referring to outcomes in the broader, societal context where societal empowerment can be observed. Importantly, we propose that time and culture are important factors that influence women's empowerment. 
We suggest that the time lag between an intervention and its evaluation may influence when empowerment effects on the different dimensions occur and that the type of intervention influences the sequence in which the three dimensions can be observed. We suggest that cultures may differ with respect to which components of empowerment are considered indicators of empowerment and how women's position in society may influence the development of women's empowerment. We propose that a Three-Dimensional Model of Women's Empowerment should guide future programs in designing, implementing, and evaluating their interventions. As such our analysis offers two main practical implications. First, based on the model we suggest that future research should differentiate between the three dimensions of women's empowerment to increase our understanding of women's empowerment and to facilitate comparisons of results across studies and cultures. Second, we suggest that program designers should specify how an intervention should stimulate which dimension(s) of women's empowerment. We hope that this model inspires longitudinal and cross-cultural research to examine the development of women's empowerment on the personal, relational, and societal dimension.", "title": "" }, { "docid": "32acba3e072e0113759278c57ee2aee2", "text": "Software product lines (SPL) relying on UML technology have been a breakthrough in software reuse in the IT domain. In the industrial automation domain, SPL are not yet established in industrial practice. One reason for this is that conventional function block programming techniques do not adequately support SPL architecture definition and product configuration, while UML tools are not industrially accepted for control software development. In this paper, the use of object oriented (OO) extensions of IEC 61131–3 are used to bridge this gap. The SPL architecture and product specifications are expressed as UML class diagrams, which serve as straightforward specifications for configuring the IEC 61131–3 control application with OO extensions. A product configurator tool has been developed using PLCopen XML technology to support the generation of an executable IEC 61131–3 application according to chosen product options. The approach is demonstrated using a mobile elevating working platform as a case study.", "title": "" }, { "docid": "2ecd0bf132b3b77dc1625ef8d09c925b", "text": "This paper presents an efficient algorithm to compute time-to-x (TTX) criticality measures (e.g. time-to-collision, time-to-brake, time-to-steer). Such measures can be used to trigger warnings and emergency maneuvers in driver assistance systems. Our numerical scheme finds a discrete time approximation of TTX values in real time using a modified binary search algorithm. It computes TTX values with high accuracy by incorporating realistic vehicle dynamics and using realistic emergency maneuver models. It is capable of handling complex object behavior models (e.g. motion prediction based on DGPS maps). Unlike most other methods presented in the literature, our approach enables decisions in scenarios with multiple static and dynamic objects in the scene. 
The flexibility of our method is demonstrated on two exemplary applications: intersection assistance for left-turn-across-path scenarios and pedestrian protection by automatic steering.", "title": "" }, { "docid": "1f1158ad55dc8a494d9350c5a5aab2f2", "text": "Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is comparatively low, given their age, education and intellectual reasoning ability. Low performance due to cerebral trauma is called acquired dyscalculia. Mathematical learning difficulties with similar features but without evidence of cerebral trauma are referred to as developmental dyscalculia. This review identifies types of developmental dyscalculia, the neuropsychological processes that are linked with them and procedures for identifying dyscalculia. The concept of dyslexia is one with which professionals working in the areas of special education, learning disabilities are reasonably familiar. The concept of dyscalculia, on the other hand, is less well known. This article describes this condition and examines its implications for understanding mathematics learning disabilities. Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is significantly depressed, given their age, education and intellectual reasoning ability ( Mental Disorders IV (DSM IV)). When this loss of ability to calculate is due to cerebral trauma, the condition is called acalculia or acquired dyscalculia. Mathematical learning difficulties that share features with acquired dyscalculia but without evidence of cerebral trauma are referred to as developmental dyscalculia (Hughes, Kolstad & Briggs, 1994). The focus of this review is on developmental dyscalculia (DD). Students who show DD have difficulty recalling number facts and completing numerical calculations. They also show chronic difficulties with numerical processing skills such recognizing number symbols, writing numbers or naming written numerals and applying procedures correctly (Gordon, 1992). They may have low self efficacy and selective attentional difficulties (Gross Tsur, Auerbach, Manor & Shalev, 1996). Not all students who display low mathematics achievement have DD. Mathematics underachievement can be due to a range of causes, for example, lack of motivation or interest in learning mathematics, low self efficacy, high anxiety, inappropriate earlier teaching or poor school attendance. It can also be due to generalised poor learning capacity, immature general ability, severe language disorders or sensory processing. Underachievement due to DD has a neuropsychological foundation. The students lack particular cognitive or information processing strategies necessary for acquiring and using arithmetic knowledge. They can learn successfully in most contexts and have relevant general language and sensory processing. They also have access to a curriculum from which their peers learn successfully. It is also necessary to clarify the relationship between DD and reading disabilities. Some aspects of both literacy and arithmetic learning draw on the same cognitive processes. 
Both, for example, 1 This article was published in Australian Journal of Learning Disabilities, 2003 8, (4).", "title": "" }, { "docid": "83e3ce2b70e1f06073fd0a476bf04ff7", "text": "Each year, a number of natural disasters strike across the globe, killing hundreds and causing billions of dollars in property and infrastructure damage. Minimizing the impact of disasters is imperative in today's society. As the capabilities of software and hardware evolve, so does the role of information and communication technology in disaster mitigation, preparation, response, and recovery. A large quantity of disaster-related data is available, including response plans, records of previous incidents, simulation data, social media data, and Web sites. However, current data management solutions offer few or no integration capabilities. Moreover, recent advances in cloud computing, big data, and NoSQL open the door for new solutions in disaster data management. In this paper, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM), with the objectives of 1) storing large amounts of disaster-related data from diverse sources, 2) facilitating search, and 3) supporting their interoperability and integration. Data are stored in a cloud environment using a combination of relational and NoSQL databases. The case study presented in this paper illustrates the use of Disaster-CDM on an example of simulation models.", "title": "" }, { "docid": "64e2b73e8a2d12a1f0bbd7d07fccba72", "text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an allaround evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.", "title": "" }, { "docid": "21393a1c52b74517336ef3e08dc4d730", "text": "The technical part of these Guidelines and Recommendations, produced under the auspices of EFSUMB, provides an introduction to the physical principles and technology on which all forms of current commercially available ultrasound elastography are based. A difference in shear modulus is the common underlying physical mechanism that provides tissue contrast in all elastograms. The relationship between the alternative technologies is considered in terms of the method used to take advantage of this. The practical advantages and disadvantages associated with each of the techniques are described, and guidance is provided on optimisation of scanning technique, image display, image interpretation and some of the known image artefacts.", "title": "" }, { "docid": "22eb9b1de056d03d15c0a3774a898cfd", "text": "Massive volumes of big RDF data are growing beyond the performance capacity of conventional RDF data management systems operating on a single node. Applications using large RDF data demand efficient data partitioning solutions for supporting RDF data access on a cluster of compute nodes. In this paper we present a novel semantic hash partitioning approach and implement a Semantic HAsh Partitioning-Enabled distributed RDF data management system, called Shape. 
This paper makes three original contributions. First, the semantic hash partitioning approach we propose extends the simple hash partitioning method through direction-based triple groups and direction-based triple replications. The latter enhances the former by controlled data replication through intelligent utilization of data access locality, such that queries over big RDF graphs can be processed with zero or very small amount of inter-machine communication cost. Second, we generate locality-optimized query execution plans that are more efficient than popular multi-node RDF data management systems by effectively minimizing the inter-machine communication cost for query processing. Third but not the least, we provide a suite of locality-aware optimization techniques to further reduce the partition size and cut down on the inter-machine communication cost during distributed query processing. Experimental results show that our system scales well and can process big RDF datasets more efficiently than existing approaches.", "title": "" }, { "docid": "e472a8e75ddf72549aeb255aa3d6fb79", "text": "In the presence of normal sensory and motor capacity, intelligent behavior is widely acknowledged to develop from the interaction of short-and long-term memory. While the behavioral, cellular, and molecular underpinnings of the long-term memory process have long been associated with the hippocampal formation, and this structure has become a major model system for the study of memory, the neural substrates of specific short-term memory functions have more and more become identified with prefrontal cortical areas (Goldman-Rakic, 1987; Fuster, 1989). The special nature of working memory was first identified in studies of human cognition and modern neuro-biological methods have identified a specific population of neurons, patterns of their intrinsic and extrinsic circuitry, and signaling molecules that are engaged in this process in animals. In this article, I will first define key features of working memory and then descdbe its biological basis in primates. Distinctive Features of a Working Memory System Working memory is the term applied to the type of memory that is active and relevant only for a short period of time, usually on the scale of seconds. A common example of working memory is keeping in mind a newly read phone number until it is dialed and then immediately forgotten. This process has been captu red by the analogy to a mental sketch pad (Baddeley, 1986) an~l is clearly different from the permanent inscription on neuronal circuitry due to learning. The criterion-useful or relevant only transiently distinguishes working memory from the processes that have been variously termed semantic (Tulving, 1972) or procedural (Squire and Cohen, 1984) memory, processes that can be considered associative in the traditional sense, i.e., information acquired by the repeated contiguity between stimuli and responses and/or consequences. If semantic and procedural memory are the processes by which stimuli and events acquire archival permanence , working memory is the process for the retrieval and proper utilization of this acquired knowledge. In this context, the contents of working memory are as much on the output side of long-term storage sites as they are an important source of input to those sites. 
Considerable evidence is now at hand to demonstrate that the brain obeys the distinction between working and other forms of memory, and that the prefrontal cortex has a preeminent role mainly in the former (Goldman-Rakic, 1987). However, memory-guided behavior obviously reflects the operation of a widely distributed system of brain structures and psychological functions, and understanding …", "title": "" }, { "docid": "4f186e992cd7d5eadb2c34c0f26f4416", "text": "Mobile devices, namely phones and tablets, have long gone \"smart\". Their growing use is both a cause and an effect of their technological advancement. Among the others, their increasing ability to store and exchange sensitive information has caused interest in exploiting their vulnerabilities, and the opposite need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable, and just require the webcam normally equipping the involved devices. On the contrary, the alternative use of fingerprints requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security. Ambient intelligence services bound to the recognition of a user, as well as social applications, such as automatic photo tagging on social networks, can especially exploit face recognition. This paper describes FIRME (Face and Iris Recognition for Mobile Engagement) as a biometric application based on a multimodal recognition of face and iris, which is designed to be embedded in mobile devices. Both design and implementation of FIRME rely on a modular architecture, whose workflow includes separate and replaceable packages. The starting one handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. As for face, an antispoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address also security-critical applications, FIRME can perform continuous reidentification and best sample selection. To further address the possible limited resources of mobile devices, all algorithms are optimized to be low-demanding and computation-light. The term \"mobile\" referred to capture equipment for different kinds of signals, e.g. images, has long been used in many cases where field activities required special portability and flexibility. As an example we can mention mobile biometric identification devices used by the U.S. army for different kinds of security tasks. Due to the critical task involving them, such devices have to offer remarkable quality, in terms of resolution and quality of the acquired data. Notwithstanding this formerly consolidated reference for the term mobile, nowadays it most often refers to modern phones, tablets and similar smart devices, for which new and engaging applications are designed. For this reason, from now on, the term mobile will refer only to …", "title": "" }, { "docid": "739db4358ac89d375da0ed005f4699ad", "text": "All doctors have encountered patients whose symptoms they cannot explain. These individuals frequently provoke despair and disillusionment. Many doctors make a link between inexplicable physical symptoms and assumed psychiatric illness. 
An array of adjectives in medicine apply to symptoms without established organic basis – ‘supratentorial’, ‘psychosomatic’, ‘functional’ – and these are sometimes used without reference to their real meaning. In psychiatry, such symptoms fall under the umbrella of the somatoform disorders, which includes a broad range of diagnoses. Conversion disorder is just one of these. Its meaning is not always well understood and it is often confused with somatisation disorder. Our aim here is to clarify the notion of a conversion disorder (and the differences between conversion and other somatoform disorders) and to discuss prevalence, aetiology, management and prognosis.", "title": "" }, { "docid": "39958f4825796d62e7a5935d04d5175d", "text": "This paper presents a wireless system which enables real-time health monitoring of multiple patient(s). In health care centers, a patient's data such as heart rate needs to be constantly monitored. The proposed system monitors the heart rate and other such data of the patient's body. For example, heart rate is measured through a Photoplethysmograph. A transmitting module is attached which continuously transmits the encoded serial data using a Zigbee module. A receiver unit is placed in the doctor's cabin, which receives and decodes the data and continuously displays it on a user interface visible on a PC/Laptop. Thus the doctor can observe and monitor many patients at the same time. The system also continuously monitors the patient(s) data and, in case of any potential irregularities in the condition of a patient, the alarm system connected to the system gives an audio-visual warning signal that the patient of a particular room needs immediate attention. In case the doctor is not in his chamber, the GSM modem connected to the system also sends a message to all the doctors of that unit giving the room number of the patient who needs immediate care.", "title": "" }, { "docid": "7c86594614a6bd434ee4e749eb661cee", "text": "The ACT-R system is a general system for modeling a wide range of higher level cognitive processes. Recently, it has been embellished with a theory of how its higher level processes interact with a visual interface. This includes a theory of how visual attention can move across the screen, encoding information into a form that can be processed by ACT-R. This system is applied to modeling several classic phenomena in the literature that depend on the speed and selectivity with which visual attention can move across a visual display. ACT-R is capable of interacting with the same computer screens that subjects do and, as such, is well suited to provide a model for tasks involving human-computer interaction. In this article, we discuss a demonstration of ACT-R's application to menu selection and show that the ACT-R theory makes unique predictions, without estimating any parameters, about the time to search a menu. These predictions are confirmed. John R. Anderson is a cognitive scientist with an interest in cognitive architectures and intelligent tutoring systems; he is a Professor of Psychology and Computer Science at Carnegie Mellon University. Michael Matessa is a graduate student studying cognitive psychology at Carnegie Mellon University; his interests include cognitive architectures and modeling the acquisition of information from the environment. 
Christian Lebiere is a computer scientist with an interest in intelligent architectures; he is a Research Programmer in the Department of Psychology and a graduate student in the School of Computer Science at Carnegie Mellon University.", "title": "" }, { "docid": "42a0e0ab1ae2b190c913e69367b85001", "text": "One of the most challenging problems facing network operators today is network attacks identification due to extensive number of vulnerabilities in computer systems and creativity of attackers. To address this problem, we present a deep learning approach for intrusion detection systems. Our approach uses Deep Auto-Encoder (DAE) as one of the most well-known deep learning models. The proposed DAE model is trained in a greedy layer-wise fashion in order to avoid overfitting and local optima. The experimental results on the KDD-CUP'99 dataset show that our approach provides substantial improvement over other deep learning-based approaches in terms of accuracy, detection rate and false alarm rate.", "title": "" }, { "docid": "1bdf1bfe81bf6f947df2254ae0d34227", "text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.", "title": "" } ]
scidocsrr
912ce990055ec29d0da81f515d867cc3
What drives mobile commerce?: An empirical evaluation of the revised technology acceptance model
[ { "docid": "0209132c7623c540c125a222552f33ac", "text": "This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any strategic elements in the model. Next to identifying the critical factors of the Web marketing, the paper argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The four S elements of the Web-Marketing Mix framework present a sound and functional conceptual basis for designing, developing and commercialising Business-to-Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of field projects; two of them are presented as case studies in the paper.  2002 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "a5ed1ebf973e3ed7ea106e55795e3249", "text": "The variable reluctance (VR) resolver is generally used instead of an optical encoder as a position sensor on motors for hybrid electric vehicles or electric vehicles owing to its reliability, low cost, and ease of installation. The commonly used conventional winding method for the VR resolver has disadvantages, such as complicated winding and unsuitability for mass production. This paper proposes an improved winding method that leads to simpler winding and better suitability for mass production than the conventional method. In this paper, through the design and finite element analysis for two types of output winding methods, the advantages and disadvantages of each method are presented, and the validity of the proposed winding method is verified. In addition, experiments with the VR resolver using the proposed winding method have been performed to verify its performance.", "title": "" }, { "docid": "c99d2914a5da4bb66ab2d3c335e3dc3b", "text": "A traditional paper-based passport contains a MachineReadable Zone (MRZ) and a Visual Inspection Zone (VIZ). The MRZ has two lines of the holder’s personal data, some document data, and verification characters encoded using the Optical Character Recognition font B (OCRB). The encoded data includes the holder’s name, date of birth, and other identifying information for the holder or the document. The VIZ contains the holder’s photo and signature, usually on the data page. However, the MRZ and VIZ can be easily duplicated with normal document reproduction technology to produce a fake passport which can pass traditional verification. Neither of these features actively verify the holder’s identity; nor do they bind the holder’s identity to the document. A passport also contains pages for stamps of visas and of country entry and exit dates, which can be easily altered to produce fake permissions and travel records. The electronic passport, supporting authentication using secure credentials on a tamper-resistant chip, is an attempt to improve on the security of the paper-based passport at minimum cost. This paper surveys the security mechanisms built into the firstgeneration of authentication mechanisms and compares them with second-generation passports. It analyzes and describes the cryptographic protocols used in Basic Access Control (BAC) and Extended Access Control (EAC).", "title": "" }, { "docid": "9747e2be285a5739bd7ee3b074a20ffc", "text": "While software metrics are a generally desirable feature in the software management functions of project planning and project evaluation, they are of especial importance with a new technology such as the object-oriented approach. This is due to the significant need to train software engineers in generally accepted object-oriented principles. This paper presents theoretical work that builds a suite of metrics for object-oriented design. In particular, these metrics are based upon measurement theory and are informed by the insights of experienced object-oriented software developers. The proposed metrics are formally evaluated against a widelyaccepted list of software metric evaluation criteria.", "title": "" }, { "docid": "bf563ecfc0dbb9a8a1b20356bde3dcad", "text": "This paper presents a parallel architecture of an QR decomposition systolic array based on the Givens rotations algorithm on FPGA. The proposed architecture adopts a direct mapping by 21 fixed-point CORDIC-based process units that can compute the QR decomposition for an 4×4 real matrix. 
In order to achieve a comprehensive resource and performance evaluation, the computational error analysis, the resources utilized, and the speed achieved on a Virtex5 XC5VTX150T FPGA are evaluated with different precisions of the intermediate word lengths. The evaluation results show that 1) the proposed systolic array satisfies 99.9% correct 4×4 QR decomposition for the 2^-13 accuracy requirement when the word length of the data path is larger than 25-bit, 2) occupies about 2,810 (13%) slices, and achieves about 2.06 M/sec updates by running at the maximum frequency 111 MHz.", "title": "" }, { "docid": "1d3cfb2e17360dac69705760b1ee7335", "text": "Anaerobic and aerobic-anaerobic threshold (4 mmol/l lactate), as well as maximal capacity, were determined in seven cross country skiers of national level. All of them ran in a treadmill exercise for at least 30 min at constant heart rates as well as at constant running speed, both as previously determined for the aerobic-anaerobic threshold. During the exercise performed with a constant speed, lactate concentration initially rose to values of nearly 4 mmol/l and then remained essentially constant during the rest of the exercise. Heart rate displayed a slight but permanent increase and was on the average above 170 beats/min. A new arrangement of concepts for the anaerobic and aerobic-anaerobic threshold (as derived from energy metabolism) is suggested, that will make possible the determination of optimal work load intensities during endurance training by regulating heart rate.", "title": "" }, { "docid": "dc4d11c0478872f3882946580bb10572", "text": "An increasing number of neural implantable devices will become available in the near future due to advances in neural engineering. This discipline holds the potential to improve many patients' lives dramatically by offering improved-and in some cases entirely new-forms of rehabilitation for conditions ranging from missing limbs to degenerative cognitive diseases. The use of standard engineering practices, medical trials, and neuroethical evaluations during the design process can create systems that are safe and that follow ethical guidelines; unfortunately, none of these disciplines currently ensure that neural devices are robust against adversarial entities trying to exploit these devices to alter, block, or eavesdrop on neural signals. The authors define \"neurosecurity\"-a version of computer science security principles and methods applied to neural engineering-and discuss why neurosecurity should be a critical consideration in the design of future neural devices.", "title": "" }, { "docid": "e54a0387984553346cf718a6fbe72452", "text": "Learning distributed representations for relation instances is a central technique in downstream NLP applications. In order to address semantic modeling of relational patterns, this paper constructs a new dataset that provides multiple similarity ratings for every pair of relational patterns on the existing dataset (Zeichner et al., 2012). In addition, we conduct a comparative study of different encoders including additive composition, RNN, LSTM, and GRU for composing distributed representations of relational patterns. We also present Gated Additive Composition, which is an enhancement of additive composition with the gating mechanism. 
Experiments show that the new dataset not only enables detailed analyses of the different encoders but also provides a gauge to predict successes of distributed representations of relational patterns in the relation classification task.", "title": "" }, { "docid": "bdaa430fe9c0de23f9f1d7efa60d04e5", "text": "BACKGROUND\nChronic thromboembolic pulmonary hypertension (CTPH) is associated with considerable morbidity and mortality. Its incidence after pulmonary embolism and associated risk factors are not well documented.\n\n\nMETHODS\nWe conducted a prospective, long-term, follow-up study to assess the incidence of symptomatic CTPH in consecutive patients with an acute episode of pulmonary embolism but without prior venous thromboembolism. Patients with unexplained persistent dyspnea during follow-up underwent transthoracic echocardiography and, if supportive findings were present, ventilation-perfusion lung scanning and pulmonary angiography. CTPH was considered to be present if systolic and mean pulmonary-artery pressures exceeded 40 mm Hg and 25 mm Hg, respectively; pulmonary-capillary wedge pressure was normal; and there was angiographic evidence of disease.\n\n\nRESULTS\nThe cumulative incidence of symptomatic CTPH was 1.0 percent (95 percent confidence interval, 0.0 to 2.4) at six months, 3.1 percent (95 percent confidence interval, 0.7 to 5.5) at one year, and 3.8 percent (95 percent confidence interval, 1.1 to 6.5) at two years. No cases occurred after two years among the patients with more than two years of follow-up data. The following increased the risk of CTPH: a previous pulmonary embolism (odds ratio, 19.0), younger age (odds ratio, 1.79 per decade), a larger perfusion defect (odds ratio, 2.22 per decile decrement in perfusion), and idiopathic pulmonary embolism at presentation (odds ratio, 5.70).\n\n\nCONCLUSIONS\nCTPH is a relatively common, serious complication of pulmonary embolism. Diagnostic and therapeutic strategies for the early identification and prevention of CTPH are needed.", "title": "" }, { "docid": "2a13609a94050c4477d94cf0d89cbdd3", "text": "In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.", "title": "" }, { "docid": "a93833a6ad41bdc5011a992509e77c9a", "text": "We present the implementation of a large-vocabulary continuous speech recognition (LVCSR) system on NVIDIA’s Tegra K1 hybrid GPU-CPU embedded platform.
The system is trained on a standard 1000-hour corpus, LibriSpeech, features a trigram WFST-based language model, and achieves state-of-the-art recognition accuracy. The fact that the system runs in real time and consumes less than 7.5 watts at peak makes it perfectly suitable for fast, but precise, offline spoken dialog applications, such as in robotics, portable gaming devices, or in-car systems.", "title": "" }, { "docid": "7774017a3468e3e390753ebbe98af4d0", "text": "We show that there exists an inherent tension between the goal of adversarial robustness and that of standard generalization. Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed in practice. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. These differences, in particular, seem to result in unexpected benefits: the representations learned by robust models tend to align better with salient data characteristics and human perception.", "title": "" }, { "docid": "3e0741fb69ee9bdd3cc455577aab4409", "text": "Recurrent neural network architectures have been shown to efficiently model long term temporal dependencies between acoustic events. However, the training time of recurrent networks is higher than that of feedforward networks due to the sequential nature of the learning algorithm. In this paper we propose a time delay neural network architecture which models long term temporal dependencies with training times comparable to standard feed-forward DNNs. The network uses sub-sampling to reduce computation during training. On the Switchboard task we show a relative improvement of 6% over the baseline DNN model. We present results on several LVCSR tasks with training data ranging from 3 to 1800 hours to show the effectiveness of the TDNN architecture in learning wider temporal dependencies in both small and large data scenarios.", "title": "" }, { "docid": "6f5b3f2d2ebb46a993124242af8a50b8", "text": "We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.", "title": "" }, { "docid": "5e04372f08336da5b8ab4d41d69d3533", "text": "Purpose – This research aims at investigating the role of certain factors in organizational culture in the success of knowledge sharing.
Such factors as interpersonal trust, communication between staff, information systems, rewards and organization structure play an important role in defining the relationships between staff and in turn, providing possibilities to break obstacles to knowledge sharing. This research is intended to contribute in helping businesses understand the essential role of organizational culture in nourishing knowledge and spreading it in order to become leaders in utilizing their know-how and enjoying prosperity thereafter. Design/methodology/approach – The conclusions of this study are based on interpreting the results of a survey and a number of interviews with staff from various organizations in Bahrain from the public and private sectors. Findings – The research findings indicate that trust, communication, information systems, rewards and organization structure are positively related to knowledge sharing in organizations. Research limitations/implications – The authors believe that further research is required to address governmental sector institutions, where organizational politics dominate a role in hoarding knowledge, through such methods as case studies and observation. Originality/value – Previous research indicated that the Bahraini society is influenced by traditions of household, tribe, and especially religion of the Arab and Islamic world. These factors define people’s beliefs and behaviours, and thus exercise strong influence in the performance of business organizations. This study is motivated by the desire to explore the role of the national organizational culture on knowledge sharing, which may be different from previous studies conducted abroad.", "title": "" }, { "docid": "fcbf97bfbcf63ee76f588a05f82de11e", "text": "The Deliberation without Attention (DWA) effect refers to apparent improvements in decision-making following a period of distraction. It has been presented as evidence for beneficial unconscious cognitive processes. We identify two major concerns with this claim: first, as these demonstrations typically involve subjective preferences, the effects of distraction cannot be objectively assessed as beneficial; second, there is no direct evidence that the DWA manipulation promotes unconscious decision processes. We describe two tasks based on the DWA paradigm in which we found no evidence that the distraction manipulation led to decision processes that are subjectively unconscious, nor that it reduced the influence of presentation order upon performance. Crucially, we found that a lack of awareness of decision process was associated with poorer performance, both in terms of subjective preference measures used in traditional DWA paradigm and in an equivalent task where performance can be objectively assessed. Therefore, we argue that reliance on conscious memory itself can explain the data. Thus the DWA paradigm is not an adequate method of assessing beneficial unconscious thought.", "title": "" }, { "docid": "4c03c0fc33f8941a7769644b5dfb62ef", "text": "A multiband MIMO antenna for a 4G mobile terminal is proposed. The antenna structure consists of a multiband main antenna element, a printed inverted-L subantenna element operating in the higher 2.5 GHz bands, and a wideband loop sub-antenna element working in lower 0.9 GHz band. In order to improve the isolation and ECC characteristics of the proposed MIMO antenna, each element is located at a different corner of the ground plane. 
In addition, the inductive coils are employed to reduce the antenna volume and realize the wideband property of the loop sub-antenna element. Finally, the proposed antenna covers LTE band 7/8, PCS, WiMAX, and WLAN service, simultaneously. The MIMO antenna has ECC lower than 0.15 and isolation higher than 12 dB in both lower and higher frequency bands.", "title": "" }, { "docid": "6a2d7b29a0549e99cdd31dbd2a66fc0a", "text": "We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.", "title": "" }, { "docid": "fbee148ef2de028cc53a371c27b4d2be", "text": "Desalination is a water-treatment process that separates salts from saline water to produce potable water or water that is low in total dissolved solids (TDS). Globally, the total installed capacity of desalination plants was 61 million m3 per day in 2008 [1]. Seawater desalination accounts for 67% of production, followed by brackish water at 19%, river water at 8%, and wastewater at 6%. Figure 1 show the worldwide feed-water percentage used in desalination. The most prolific users of desalinated water are in the Arab region, namely, Saudi Arabia, Kuwait, United Arab Emirates, Qatar, Oman, and Bahrain [2].", "title": "" }, { "docid": "af254a16b14a3880c9b8fe5b13f1a695", "text": "MOOCs or Massive Online Open Courses based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in far or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting interesting topics and goals for further research. 
Finally, it proposes a framework that includes the use of software agents with the aim of improving and personalizing the management, delivery, efficiency and evaluation of massive online courses on an individual basis.", "title": "" } ]
scidocsrr
9effaeade1a16756f3625880c2879c12
A Generalization of Regenerating Codes for Clustered Storage Systems
[ { "docid": "26597dea3d011243a65a1d2acdae19e8", "text": "Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. The bandwidth required to repair the system after a node failure also plays a crucial role in the system performance. In [1] authors have shown that a tradeoff exists between storage and repair bandwidth. They also have introduced the scheme of regenerating codes which meet this tradeoff. In this paper, a scheme of Exact Regenerating Codes is introduced, which are regenerating codes with an additional property of regenerating back the same node upon failure. For the minimum bandwidth point, which is suitable for applications like distributed mail servers, explicit construction for exact regenerating codes is provided. A subspace approach is provided, using which the necessary and sufficient conditions for a linear code to be an exact regenerating code are derived. This leads to the uniqueness of our construction. For the minimum storage point which suits applications such as storage in peer-to-peer systems, an explicit construction of regenerating codes for certain suitable parameters is provided. This code supports variable number of nodes and can handle multiple simultaneous node failures. The constructions given for both the points require a low field size and have low complexity.", "title": "" } ]
[ { "docid": "12eff845ccb6e5cc2b2fbe74935aff46", "text": "The study of this paper presents a new technique to use automatic number plate detection and recognition. This system plays a significant role throughout this busy world, owing to rise in use of vehicles day-by-day. Some of the applications of this software are automatic toll tax collection, unmanned parking slots, safety, and security. The current scenario happening in India is, people, break the rules of the toll and move away which can cause many serious issues like accidents. This system uses efficient algorithms to detect the vehicle number from real-time images. The system detects the license plate on the vehicle first and then captures the image of it. Vehicle number plate is localized and characters are segmented and further recognized with help of neural network. The system is designed for grayscale images so it detects the number plate regardless of its color. The resulting vehicle number plate is then compared with the available database of all vehicles which have been already registered by the users so as to come up with information about vehicle type and charge accordingly. The vehicle information such as date, toll amount is stored in the database to maintain the record.", "title": "" }, { "docid": "0ccde44cffc4d888668b14370e147529", "text": "Bitcoin is a crypto currency with several advantages over previous approaches. Transactions are con®rmed and stored by a peer-to-peer network in a blockchain. Therefore, all transactions are public and soon solutions where designed to increase privacy in Bitcoin Many come with downsides, like requiring a trusted third-party or requiring modi®cations to Bitcoin. In this paper, we compare these approaches according to several criteria. Based on our ®ndings, CoinJoin emerges as the best approach for anonymizing Bitcoins today.", "title": "" }, { "docid": "df4d0112eecfcc5c6c57784d1a0d010d", "text": "2 The design and measured results are reported on three prototype DC-DC converters which successfully demonstrate the design techniques of this thesis and the low-power enabling capabilities of DC-DC converters in portable applications. Voltage scaling for low-power throughput-constrained digital signal processing is reviewed and is shown to provide up to an order of magnitude power reduction compared to existing 3.3 V standards when enabled by high-efficiency low-voltage DC-DC conversion. A new ultra-low-swing I/O strategy, enabled by an ultra-low-voltage and low-power DCDC converter, is used to reduce the power of high-speed inter-chip communication by greater than two orders of magnitude. Dynamic voltage scaling is proposed to dynamically trade general-purpose processor throughput for energy-efficiency, yielding up to an order of magnitude improvement in the average energy per operation of the processor. This is made possible by a new class of voltage converter, called the dynamic DC-DC converter, whose primary performance objectives and design considerations are introduced in this thesis. Robert W. Brodersen, Chairman of Committee Table of", "title": "" }, { "docid": "08a7621fe99afba5ec9a78c76192f43d", "text": "Orthogonal Frequency Division Multiple Access (OFDMA) as well as other orthogonal multiple access techniques fail to achieve the system capacity limit in the uplink due to the exclusivity in resource allocation. This issue is more prominent when fairness among the users is considered in the system. 
Current Non-Orthogonal Multiple Access (NOMA) techniques introduce redundancy by coding/spreading to facilitate the separation of the users' signals at the receiver, which degrades the system's spectral efficiency. Hence, in order to achieve higher capacity, more efficient NOMA schemes need to be developed. In this paper, we propose a NOMA scheme for the uplink that removes the resource allocation exclusivity and allows more than one user to share the same subcarrier without any coding/spreading redundancy. Joint processing is implemented at the receiver to detect the users' signals. However, to control the receiver complexity, an upper limit on the number of users per subcarrier needs to be imposed. In addition, a novel subcarrier and power allocation algorithm is proposed for the new NOMA scheme that maximizes the users' sum-rate. The link-level performance evaluation has shown that the proposed scheme achieves a bit error rate close to the single-user case. Numerical results show that the proposed NOMA scheme can significantly improve the system performance in terms of spectral efficiency and fairness compared to OFDMA.", "title": "" }, { "docid": "ff3229e4afdedd01a936c7e70f8d0d02", "text": "This paper highlights an updated anatomy of parametrial extension with emphasis on magnetic resonance imaging (MRI) assessment of disease spread in the parametrium in patients with locally advanced cervical cancer. Pelvic landmarks were identified to assess the anterior and posterior extensions of the parametria, besides the lateral extension, as defined in a previous anatomical study. A series of schematic drawings and MRI images are shown to document the anatomical delineation of disease on MRI, which is crucial not only for correct image-based three-dimensional radiotherapy but also for the surgical oncologist, since neoadjuvant chemoradiotherapy followed by radical surgery is emerging in Europe as a valid alternative to standard chemoradiation.", "title": "" }, { "docid": "f330cfad6e7815b1b0670217cd09b12e", "text": "In this paper we study the effect of false data injection attacks on state estimation carried over a sensor network monitoring a discrete-time linear time-invariant Gaussian system. The steady state Kalman filter is used to perform state estimation while a failure detector is employed to detect anomalies in the system. An attacker wishes to compromise the integrity of the state estimator by hijacking a subset of sensors and sending altered readings. In order to inject fake sensor measurements without being detected the attacker will need to carefully design his actions to fool the estimator as abnormal sensor measurements would result in an alarm. It is important for a designer to determine the set of all the estimation biases that an attacker can inject into the system without being detected, providing a quantitative measure of the resilience of the system to such attacks. To this end, we will provide an ellipsoidal algorithm to compute inner and outer approximations of this set. A numerical example is presented to further illustrate the effect of false data injection attacks on state estimation.", "title": "" }, { "docid": "27cc510f79a4ed76da42046b49bbb9fd", "text": "This article reports the orthodontic treatment of a 25-year-old female patient whose chief complaint was the inclination of the maxillary occlusal plane in front view. The individualized vertical placement of brackets is described.
This placement made possible a symmetrical occlusal plane to be achieved in a rather straightforward manner without the need for further technical resources.", "title": "" }, { "docid": "1e4ecef47048e1f724733fa19526935f", "text": "Theories of aggressive behavior and ethological observations in animals and children suggest the existence of distinct forms of reactive (hostile) and proactive (instrumental) aggression. Toward the validation of this distinction, groups of reactive aggressive, proactive aggressive, and nonaggressive children were identified (n = 624 9-12-year-olds). Social information-processing patterns were assessed in these groups by presenting hypothetical vignettes to subjects. 3 hypotheses were tested: (1) only the reactive-aggressive children would demonstrate hostile biases in their attributions of peers' intentions in provocation situations (because such biases are known to lead to reactive anger); (2) only proactive-aggressive children would evaluate aggression and its consequences in relatively positive ways (because proactive aggression is motivated by its expected external outcomes); and (3) proactive-aggressive children would select instrumental social goals rather than relational goals more often than nonaggressive children. All 3 hypotheses were at least partially supported.", "title": "" }, { "docid": "723cf2a8b6142a7e52a0ff3fb74c3985", "text": "The Internet of Mobile Things (IoMT) requires support for a data lifecycle process ranging from sorting, cleaning and monitoring data streams to more complex tasks such as querying, aggregation, and analytics. Current solutions for stream data management in IoMT have been focused on partial aspects of a data lifecycle process, with special emphasis on sensor networks. This paper aims to address this problem by developing an offline and real-time data lifecycle process that incorporates a layered, data-flow centric, and an edge/cloud computing approach that is needed for handling heterogeneous, streaming and geographicallydispersed IoMT data streams. We propose an end to end architecture to support an instant intra-layer communication that establishes a stream data flow in real-time to respond to immediate data lifecycle tasks at the edge layer of the system. Our architecture also provides offline functionalities for later analytics and visualization of IoMT data streams at the core layer of the system. Communication and process are thus the defining factors in the design of our stream data management solution for IoMT. We describe and evaluate our prototype implementation using real-time transit data feeds and a commercial edge-based platform. Preliminary results are showing the advantages of running data lifecycle tasks at the edge of the network for reducing the volume of data streams that are redundant and should not be transported to the cloud. Keywords—stream data lifecycle, edge computing, cloud computing, Internet of Mobile Things, end to end architectures", "title": "" }, { "docid": "f6d08e76bfad9c4988253b643163671a", "text": "This paper proposes a technique for unwanted lane departure detection. Initially, lane boundaries are detected using a combination of the edge distribution function and a modified Hough transform. In the tracking stage, a linear-parabolic lane model is used: in the near vision field, a linear model is used to obtain robust information about lane orientation; in the far field, a quadratic function is used, so that curved parts of the road can be efficiently tracked. 
For lane departure detection, orientations of both lane boundaries are used to compute a lane departure measure at each frame, and an alarm is triggered when such measure exceeds a threshold. Experimental results indicate that the proposed system can fit lane boundaries in the presence of several image artifacts, such as sparse shadows, lighting changes and bad conditions of road painting, being able to detect in advance involuntary lane crossings. q 2005 Elsevier Ltd All rights reserved.", "title": "" }, { "docid": "449dbec9bcfe268a5db432c116a61087", "text": "Cake appearance is an important attribute of freeze-dried products, which may or may not be critical with respect to product quality (i.e., safety and efficacy). Striving for \"uniform and elegant\" cake appearance may continue to remain an important goal during the design and development of a lyophilized drug product. However, \"sometimes\" a non-ideal cake appearance has no impact on product quality and is an inherent characteristic of the product (due to formulation, drug product presentation, and freeze-drying process). This commentary provides a summary of challenges related to visual appearance testing of freeze-dried products, particularly on how to judge the criticality of cake appearance. Furthermore, a harmonized nomenclature and description for variations in cake appearance from the ideal expectation of uniform and elegant is provided, including representative images. Finally, a science and risk-based approach is discussed on establishing acceptance criteria for cake appearance.", "title": "" }, { "docid": "37dcc23a5504466a5f8200f281487888", "text": "Computational approaches that 'dock' small molecules into the structures of macromolecular targets and 'score' their potential complementarity to binding sites are widely used in hit identification and lead optimization. Indeed, there are now a number of drugs whose development was heavily influenced by or based on structure-based design and screening strategies, such as HIV protease inhibitors. Nevertheless, there remain significant challenges in the application of these approaches, in particular in relation to current scoring schemes. Here, we review key concepts and specific features of small-molecule–protein docking methods, highlight selected applications and discuss recent advances that aim to address the acknowledged limitations of established approaches.", "title": "" }, { "docid": "7ea89697894cb9e0da5bfcebf63be678", "text": "This paper develops a frequency-domain iterative machine learning (IML) approach for output tracking. Frequency-domain iterative learning control allows bounded noncausal inversion of system dynamics and is, therefore, applicable to nonminimum phase systems. The model used in the frequency-domain control update can be obtained from the input–output data acquired during the iteration process. However, such data-based approaches can have challenges if the noise-to-output-signal ratio is large. The main contribution of this paper is the use of kernel-based machine learning during the iterations to estimate both the model (and its inverse) for the control update, as well as the model uncertainty needed to establish bounds on the iteration gain for ensuring convergence. Another contribution is the proposed use of augmented inputs with persistency of excitation to promote learning of the model during iterations. The improved model can be used to better infer the inverse input resulting in lower initial error for new output trajectories. 
The proposed IML approach with the augmented input is illustrated with simulations for a benchmark nonminimum phase example.", "title": "" }, { "docid": "dda8427a6630411fc11e6d95dbff08b9", "text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent research adapts the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out-of-vocabulary entity mentions compared to the tedious and error-prone methods which depend on gazetteers and regular expressions.", "title": "" }, { "docid": "bf48f9ac763b522b8d43cfbb281fbffa", "text": "We present a declarative framework for collective deduplication of entity references in the presence of constraints. Constraints occur naturally in many data cleaning domains and can improve the quality of deduplication. An example of a constraint is \"each paper has a unique publication venue\"; if two paper references are duplicates, then their associated conference references must be duplicates as well. Our framework supports collective deduplication, meaning that we can dedupe both paper references and conference references collectively in the example above. Our framework is based on a simple declarative Datalog-style language with precise semantics. Most previous work on deduplication either ignores constraints or uses them in an ad-hoc domain-specific manner. We also present efficient algorithms to support the framework. Our algorithms have precise theoretical guarantees for a large subclass of our framework. We show, using a prototype implementation, that our algorithms scale to very large datasets. We provide thorough experimental results over real-world data demonstrating the utility of our framework for high-quality and scalable deduplication.", "title": "" }, { "docid": "69a6cfb649c3ccb22f7a4467f24520f3", "text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-to-sequence question-generation model with a copy mechanism. Empirically, our key-phrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches.
We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This two-stage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.", "title": "" }, { "docid": "00ed53e43725d782b38c185faa2c8fd2", "text": "In this paper we evaluate tensegrity probes on the basis of the EDL phase performance of the probe in the context of a mission to Titan. Tensegrity probes are structurally designed around tension networks and are composed of tensile and compression elements. Such probes have unique physical force distribution properties and can be both landing and mobility platforms, allowing for a dramatically simpler mission profile and reduced costs. Our concept is to develop a tensegrity probe in which the tensile network can be actively controlled to enable compact stowage for launch followed by deployment in preparation for landing. Due to their natural compliance and structural force distribution properties, tensegrity probes can safely absorb significant impact forces, enabling high speed Entry, Descent, and Landing (EDL) scenarios where the probe itself acts much like an airbag. However, unlike an airbag which must be discarded after a single use, the tensegrity probe can actively control its shape to provide compliant rolling mobility while still maintaining its ability to safely absorb impact shocks that might occur during exploration. (See Figure 1) This combination of functions from a single structure enables compact and light-weight planetary exploration missions with the capabilities of traditional wheeled rovers, but with mass and cost similar to or less than a stationary probe. In this paper we cover this new mission concept and tensegrity probe technologies for compact storage, EDL, and surface mobility, with a focus on analyzing the landing phase performance and the ability to protect and deliver scientific payloads. The analysis is then supported with results from physical prototype drop-tests.", "title": "" }, { "docid": "5815fb8da17375f24bbdeab7af91f3a3", "text": "We introduce a new method for frame-semantic parsing that significantly improves the prior state of the art. Our model leverages the advantages of a deep bidirectional LSTM network which predicts semantic role labels word by word and a relational network which predicts semantic roles for individual text expressions in relation to a predicate. The two networks are integrated into a single model via knowledge distillation, and a unified graphical model is employed to jointly decode frames and semantic roles during inference. Experiments on the standard FrameNet data show that our model significantly outperforms existing neural and non-neural approaches, achieving a 5.7 F1 gain over the current state of the art, for full frame structure extraction.", "title": "" }, { "docid": "6cb480efca7138e26ce484eb28f0caec", "text": "Given the demand for authentic personal interactions over social media, it is unclear how much firms should actively manage their social media presence. We study this question empirically in a healthcare setting. We show empirically that active social media management drives more user-generated content. However, we find that this is due to an increase in incremental user postings from an organization’s employees rather than from its clients.
This result holds when we explore exogenous variation in social media policies, employees and clients that are explained by medical marketing laws, medical malpractice laws and distortions in Medicare incentives. Further examination suggests that content being generated mainly by employees can be avoided if a firm’s postings are entirely client-focused. However, empirically the majority of firm postings seem not to be specifically targeted to clients’ interests, instead highlighting more general observations or achievements of the firm itself. We show that untargeted postings like this provoke activity by employees rather than clients. This may not be a bad thing, as employee-generated content may help with employee motivation, recruitment or retention, but it does suggest that social media should not be funded or managed exclusively as a marketing function of the firm.", "title": "" }, { "docid": "e2737102af24a27c4f531e5242807c76", "text": "We present the design, fabrication, and characterization of a fiber optically sensorized robotic hand for multi purpose manipulation tasks. The robotic hand has three fingers that enable both pinch and power grips. The main bone structure was made of a rigid plastic material and covered by soft skin. Both bone and skin contain embedded fiber optics for force and tactile sensing, respectively. Eight fiber optic strain sensors were used for rigid bone force sensing, and six fiber optic strain sensors were used for soft skin tactile sensing. For characterization, different loads were applied in two orthogonal axes at the fingertip and the sensor signals were measured from the bone structure. The skin was also characterized by applying a light load on different places for contact localization. The actuation of the hand was achieved by a tendon-driven under-actuated system. Gripping motions are implemented using an active tendon located on the volar side of each finger and connected to a motor. Opening motions of the hand were enabled by passive elastic tendons located on the dorsal side of each finger.", "title": "" } ]
scidocsrr
7459e0c8a32530a2615a218484e8a04d
Meta-analysis of the heritability of human traits based on fifty years of twin studies
[ { "docid": "b51fcfa32dbcdcbcc49f1635b44601ed", "text": "An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular \"funnel-graph.\" The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.", "title": "" }, { "docid": "f4e73a0c766ce1ead78b2b770e641f61", "text": "Epistasis, or interactions between genes, has long been recognized as fundamentally important to understanding the structure and function of genetic pathways and the evolutionary dynamics of complex genetic systems. With the advent of high-throughput functional genomics and the emergence of systems approaches to biology, as well as a new-found ability to pursue the genetic basis of evolution down to specific molecular changes, there is a renewed appreciation both for the importance of studying gene interactions and for addressing these questions in a unified, quantitative manner.", "title": "" } ]
[ { "docid": "07cbbb184a627456922a1e66ae54d3d2", "text": "A maximum likelihood (ML) acoustic source location estimation method is presented for the application in a wireless ad hoc sensor network. This method uses acoustic signal energy measurements taken at individual sensors of an ad hoc wireless sensor network to estimate the locations of multiple acoustic sources. Compared to the existing acoustic energy based source localization methods, this proposed ML method delivers more accurate results and offers the enhanced capability of multiple source localization. A multiresolution search algorithm and an expectation-maximization (EM) like iterative algorithm are proposed to expedite the computation of source locations. The Crame/spl acute/r-Rao Bound (CRB) of the ML source location estimate has been derived. The CRB is used to analyze the impacts of sensor placement to the accuracy of location estimates for single target scenario. Extensive simulations have been conducted. It is observed that the proposed ML method consistently outperforms existing acoustic energy based source localization methods. An example applying this method to track military vehicles using real world experiment data also demonstrates the performance advantage of this proposed method over a previously proposed acoustic energy source localization method.", "title": "" }, { "docid": "1a154992369fc30c36613fc811df53ac", "text": "Speech recognition is a subjective phenomenon. Despite being a huge research in this field, this process still faces a lot of problem. Different techniques are used for different purposes. This paper gives an overview of speech recognition process. Various progresses have been done in this field. In this work of project, it is shown that how the speech signals are recognized using back propagation algorithm in neural network. Voices of different persons of various ages", "title": "" }, { "docid": "fce49da5560a89cef5738cbcb41ad2bd", "text": "This paper conceptualizes IT service management (ITSM) capability, a key competence of today’s IT provider organizations, and presents a survey instrument to facilitate the measurement of an ITSM capability for research and practice. Based on the review of four existing ITSM maturity models (CMMISVC, COBIT 4.1, SPICE, ITIL v3), we first develop a multi-attributive scale to assess maturity on an ITSM process level. We then use this scale in a survey with 205 ITSM key informants who assessed IT provider organizations along a set of 26 established ITSM processes. Our exploratory factor analysis and measurement model assessment results support the validity of an operationalization of ITSM capability as a second-order construct formed by ITSM processes that span three dimensions: service planning, service transition, and service operation. The practical utility of our survey instrument and avenues for future research on ITSM capability are outlined.", "title": "" }, { "docid": "c4282486dad6f0fef06964bd3fa45272", "text": "In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods. In this paper, we introduce the MatchZoo toolkit that aims to facilitate the designing, comparing and sharing of deep text matching models. 
Specifically, the toolkit provides a unified data preparation module for different text matching problems, a flexible layer-based model construction process, and a variety of training objectives and evaluation metrics. In addition, the toolkit has implemented two schools of representative deep text matching models, namely representation-focused models and interaction-focused models. Finally, users can easily modify existing models, create and share their own models for text matching in MatchZoo.", "title": "" }, { "docid": "07d0009e53d2ccdfe7888b12ac173cd0", "text": "This paper presents a training method that encodes each word into a different vector in semantic space and its relation to low entropy coding. Elman network is employed in the method to process word sequences from literary works. The trained codes possess reduced entropy and are used in ranking, indexing, and categorizing literary works. A modification of the method to train the multi-vector for each polysemous word is also presented where each vector represents a different meaning of its word. These multiple vectors can accommodate several different meanings of their word. This method is applied to the stylish analyses of two Chinese novels, Dream of the Red Chamber and Romance of the Three Kingdoms. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d509cb384ecddafa0c4f866882af2c77", "text": "On 9 January 1857, a large earthquake of magnitude 7.9 occurred on the San Andreas fault, with rupture initiating at Parkfield in central California and propagating in a southeasterly direction over a distance of more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. Indeed, newspaper reports of sloshing observed in the Los Angeles river point to long-duration (1–2 min) and long-period (2–8 sec) shaking. If such an earthquake were to happen today, it could impose significant seismic demand on present-day tall buildings. Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the 17 January 1994, magnitude 6.7 Northridge earthquake, we determine the damage to an existing and a new 18-story steel moment-frame building in southern California due to ground motion from two hypothetical magnitude 7.9 earthquakes on the San Andreas fault. Our study indicates that serious damage occurs in these buildings at many locations in the region in one of the two scenarios. For a north-to-south rupture scenario, the peak velocity is of the order of 1 m/sec in the Los Angeles basin, including downtown Los Angeles, and 2 m/sec in the San Fernando valley, while the peak displacements are of the order of 1 m and 2 m in the Los Angeles basin and San Fernando valley, respectively. For a south-to-north rupture scenario the peak velocities and displacements are reduced by a factor of roughly 2.", "title": "" }, { "docid": "9039058c93aeaa99dae15617e5032b33", "text": "Data sparsity is one of the most challenging problems for recommender systems. One promising solution to this problem is cross-domain recommendation, i.e., leveraging feedbacks or ratings from multiple domains to improve recommendation performance in a collective manner. In this paper, we propose an Embedding and Mapping framework for Cross-Domain Recommendation, called EMCDR. The proposed EMCDR framework distinguishes itself from existing cross-domain recommendation models in two aspects.
First, a multi-layer perceptron is used to capture the nonlinear mapping function across domains, which offers high flexibility for learning domain-specific features of entities in each domain. Second, only the entities with sufficient data are used to learn the mapping function, guaranteeing its robustness to noise caused by data sparsity in single domain. Extensive experiments on two cross-domain recommendation scenarios demonstrate that EMCDR significantly outperforms stateof-the-art cross-domain recommendation methods.", "title": "" }, { "docid": "25c25864ac5584b99aacbda88bda6203", "text": "Our goal is to be able to build a generative model from a deep neural network architecture to try to create music that has both harmony and melody and is passable as music composed by humans. Previous work in music generation has mainly been focused on creating a single melody. More recent work on polyphonic music modeling, centered around time series probability density estimation, has met some partial success. In particular, there has been a lot of work based off of Recurrent Neural Networks combined with Restricted Boltzmann Machines (RNNRBM) and other similar recurrent energy based models. Our approach, however, is to perform end-to-end learning and generation with deep neural nets alone.", "title": "" }, { "docid": "dda8427a6630411fc11e6d95dbff08b9", "text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.", "title": "" }, { "docid": "88976f137ea43b1be8d133ddc4124af2", "text": "Real-time stereo vision is attractive in many areas such as outdoor mapping and navigation. As a popular accelerator in the image processing field, GPU is widely used for the studies of the stereo vision algorithms. Recently, many stereo vision systems on GPU have achieved low error rate, as a result of the development of deep learning. However, their processing speed is normally far from the real-time requirement. In this paper, we propose a real-time stereo vision system on GPU for the high-resolution images. This system also maintains a low error rate compared with other fast systems. In our approach, the image is resized to reduce the computational complexity and to realize the real-time processing. 
The low error rate is kept by using the cost aggregation with multiple blocks, secondary matching and sub-pixel estimation. Its processing speed is 41 fps for $2888\\times 1920$ pixel images when the maximum disparity is 760.", "title": "" }, { "docid": "01ee0af8491087a7c50002c7d6b7411e", "text": "The way that information propagates in neural networks is of great importance. In this paper, we propose Path Aggregation Network (PANet) aiming at boosting information flow in proposal-based instance segmentation framework. Specifically, we enhance the entire feature hierarchy with accurate localization signals in lower layers by bottom-up path augmentation, which shortens the information path between lower layers and topmost feature. We present adaptive feature pooling, which links feature grid and all feature levels to make useful information in each level propagate directly to following proposal subnetworks. A complementary branch capturing different views for each proposal is created to further improve mask prediction. These improvements are simple to implement, with subtle extra computational overhead. Yet they are useful and make our PANet reach the 1st place in the COCO 2017 Challenge Instance Segmentation task and the 2nd place in Object Detection task without large-batch training. PANet is also state-of-the-art on MVD and Cityscapes.", "title": "" }, { "docid": "e59f53449783b3b7aceef8ae3b43dae1", "text": "We use the definitions of (11). However, in deference to some recent attempts to unify the terminology of graph theory we replace the term 'circuit' by 'polygon', and 'degree' by 'valency'. A graph G is 3-connected (nodally 3-connected) if it is simple and non-separable and satisfies the following condition: if G is the union of two proper subgraphs H and K such that H∩K consists solely of two vertices u and v, then one of H and K is a link-graph (arc-graph) with ends u and v. It should be noted that the union of two proper subgraphs H and K of G can be the whole of G only if each of H and K includes at least one edge or vertex not belonging to the other. In this paper we are concerned mainly with nodally 3-connected graphs, but a specialization to 3-connected graphs is made in § 12. In § 3 we discuss conditions for a nodally 3-connected graph to be planar, and in § 5 we discuss conditions for the existence of Kuratowski subgraphs of a given graph. In §§ 6-9 we show how to obtain a convex representation of a nodally 3-connected graph, without Kuratowski subgraphs, by solving a set of linear equations. Some extensions of these results to general graphs, with a proof of Kuratowski's theorem, are given in §§ 10-11. In § 12 we discuss the representation in the plane of a pair of dual graphs, and in § 13 we draw attention to some unsolved problems.", "title": "" }, { "docid": "ea5a455bca9ff0dbb1996bd97d89dfe5", "text": "Single exon genes (SEG) are archetypical of prokaryotes. Hence, their presence in intron-rich, multi-cellular eukaryotic genomes is perplexing. Consequently, a study on SEG origin and evolution is important. Towards this goal, we took the first initiative of identifying and counting SEG in nine completely sequenced eukaryotic organisms--four of which are unicellular (E. cuniculi, S. cerevisiae, S. pombe, P. falciparum) and five of which are multi-cellular (C. elegans, A. thaliana, D. melanogaster, M. musculus, H. sapiens). This exercise enabled us to compare their proportion in unicellular and multi-cellular genomes.
The comparison suggests that the SEG fraction decreases with gene count (r = -0.80) and increases with gene density (r = 0.88) in these genomes. We also examined the distribution patterns of their protein lengths in different genomes.", "title": "" }, { "docid": "c28ee3a41d05654eedfd379baf2d5f24", "text": "The problem of classifying subjects into disease categories is of common occurrence in medical research. Machine learning tools such as Artificial Neural Network (ANN), Support Vector Machine (SVM) and Logistic Regression (LR) and Fisher’s Linear Discriminant Analysis (LDA) are widely used in the areas of prediction and classification. The main objective of these competing classification strategies is to predict a dichotomous outcome (e.g. disease/healthy) based on several features.", "title": "" }, { "docid": "c5cfe386f6561eab1003d5572443612e", "text": "Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over £108bn p.a., with 3.9m employees in a truly international industry and exports £20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (\"Transforming Food Production: from Farm to Fork\"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.", "title": "" }, { "docid": "51e2f490072820230d71f648d70babcb", "text": "Classification and regression trees are becoming increasingly popular for partitioning data and identifying local structure in small and large datasets. Classification trees include those models in which the dependent variable (the predicted variable) is categorical. Regression trees include those in which it is continuous. This paper discusses pitfalls in the use of these methods and highlights where they are especially suitable. Paper presented at the 1992 Sun Valley, ID, Sawtooth/SYSTAT Joint Software Conference.", "title": "" }, { "docid": "269c1cb7fe42fd6403733fdbd9f109e3", "text": "Myofibroblasts are the key players in extracellular matrix remodeling, a core phenomenon in numerous devastating fibrotic diseases. Not only in organ fibrosis, but also the pivotal role of myofibroblasts in tumor progression, invasion and metastasis has recently been highlighted. Myofibroblast targeting has gained tremendous attention in order to inhibit the progression of incurable fibrotic diseases, or to limit the myofibroblast-induced tumor progression and metastasis. In this review, we outline the origin of myofibroblasts, their general characteristics and functions during fibrosis progression in three major organs: liver, kidneys and lungs as well as in cancer. We will then discuss the state-of-the-art drug targeting technologies to myofibroblasts in context of the above-mentioned organs and tumor microenvironment.
The overall objective of this review is therefore to advance our understanding in drug targeting to myofibroblasts, and concurrently identify opportunities and challenges for designing new strategies to develop novel diagnostics and therapeutics against fibrosis and cancer.", "title": "" }, { "docid": "af7f83599c163d0f519f1e2636ae8d44", "text": "There is a set of characterological attributes thought to be associated with developing success at critical thinking (CT). This paper explores the disposition toward CT theoretically, and then as it appears to be manifest in college students. Factor analytic research grounded in a consensus-based conceptual analysis of CT described seven aspects of the overall disposition toward CT: truth-seeking, open-mindedness, analyticity, systematicity, CTconfidence, inquisitiveness, and cognitive maturity. The California Critical Thinking Disposition Inventory (CCTDI), developed in 1992, was used to sample college students at two comprehensive universities. Entering college freshman students showed strengths in openmindedness and inquisitiveness, weaknesses in systematicity and opposition to truth-seeking. Additional research indicates the disposition toward CT is highly correlated with the psychological constructs of absorption and openness to experience, and strongly predictive of ego-resiliency. A preliminary study explores the interesting and potentially complex interrelationship between the disposition toward CT and CT abilities. In addition to the significance of this work for psychological studies of human development, empirical research on the disposition toward CT promises important implications for all levels of education. 1 This essay appeared as Facione, PA, Sánchez, (Giancarlo) CA, Facione, NC & Gainen, J., (1995). The disposition toward critical thinking. Journal of General Education. Volume 44, Number(1). 1-25.", "title": "" }, { "docid": "023285cbd5d356266831fc0e8c176d4f", "text": "The two authorsLakoff, a linguist and Nunez, a psychologistpurport to introduce a new field of study, i.e. \"mathematical idea analysis\", with this book. By \"mathematical idea analysis\", they mean to give a scientifically plausible account of mathematical concepts using the apparatus of cognitive science. This approach is meant to be a contribution to academics and possibly education as it helps to illuminate how we cognitise mathematical concepts, which are supposedly undecipherable and abstruse to laymen. The analysis of mathematical ideas, the authors claim, cannot be done within mathematics, for even metamathematicsrecursive theory, model theory, set theory, higherorder logic still requires mathematical idea analysis in itself! Formalism, by its very nature, voids symbols of their meanings and thus cognition is required to imbue meaning. Thus, there is a need for this new field, in which the authors, if successful, would become pioneers.", "title": "" }, { "docid": "0824992bb506ac7c8a631664bf608086", "text": "There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. 
Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Many of the existing image fusion methods, including, but not limited to, intensity-hue-saturation, Brovey transform, principal component analysis, high-pass filtering, high-pass modulation, the à trous algorithm-based wavelet transform, and multiresolution analysis-based intensity modulation (MRAIM), are evaluated and found to be particular cases of the GIF method. The performance of each image fusion method is theoretically analyzed based on how the corresponding low-resolution panchromatic image is computed and how the modulation coefficients are set. An experiment based on IKONOS images shows that there is consistency between the theoretical analysis and the experimental results and that the MRAIM method synthesizes the images closest to those the corresponding multisensors would observe at the high-resolution level.", "title": "" } ]
scidocsrr
f4a0c4e0bffa5e0e47db1a7f268dc27e
BLEWS: Using Blogs to Provide Context for News Articles
[ { "docid": "77125ee1f92591489ee5d933710cc1f1", "text": "Subjectivity in natural language refers to aspects of language used to express opinions, evaluations, and speculations. There are numerous natural language processing applications for which subjectivity analysis is relevant, including information extraction and text categorization. The goal of this work is learning subjective language from corpora. Clues of subjectivity are generated and tested, including low-frequency words, collocations, and adjectives and verbs identified using distributional similarity. The features are also examined working together in concert. The features, generated from different data sets using different procedures, exhibit consistency in performance in that they all do better and worse on the same data sets. In addition, this article shows that the density of subjectivity clues in the surrounding context strongly affects how likely it is that a word is subjective, and it provides the results of an annotation study assessing the subjectivity of sentences with high-density features. Finally, the clues are used to perform opinion piece recognition (a type of text categorization and genre detection) to demonstrate the utility of the knowledge acquired in this article.", "title": "" } ]
[ { "docid": "71b09fba5c4054af268da7c0037253e6", "text": "Recurrent neural networks are now the state-of-the-art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments on attention mechanisms have equipped feedforward networks with similar capabilities, hence enabling faster computations due to the increase in the number of operations that can be parallelized. We explore this new type of architecture in the domain of question-answering and propose a novel approach that we call Fully Attention Based Information Retriever (FABIR). We show that FABIR achieves competitive results in the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.", "title": "" }, { "docid": "c55c339eb53de3a385df7d831cb4f24b", "text": "Massive Open Online Courses (MOOCs) have gained tremendous popularity in the last few years. Thanks to MOOCs, millions of learners from all over the world have taken thousands of high-quality courses for free. Putting together an excellent MOOC ecosystem is a multidisciplinary endeavour that requires contributions from many different fields. Artificial intelligence (AI) and data mining (DM) are two such fields that have played a significant role in making MOOCs what they are today. By exploiting the vast amount of data generated by learners engaging in MOOCs, DM improves our understanding of the MOOC ecosystem and enables MOOC practitioners to deliver better courses. Similarly, AI, supported by DM, can greatly improve student experience and learning outcomes. In this survey paper, we first review the state-of-the-art artificial intelligence and data mining research applied to MOOCs, emphasising the use of AI and DM tools and techniques to improve student engagement, learning outcomes, and our understanding of the MOOC ecosystem. We then offer an overview of key trends and important research to carry out in the fields of AI and DM so that MOOCs can reach their full potential.", "title": "" }, { "docid": "653fee86af651e13e0d26fed35ef83e4", "text": "Small ducted fan autonomous vehicles have potential for several applications, especially for missions in urban environments. This paper discusses the use of dynamic inversion with neural network adaptation to provide an adaptive controller for the GTSpy, a small ducted fan autonomous vehicle based on the Micro Autonomous Systems’ Helispy. This approach allows utilization of the entire low speed flight envelope with a relatively poorly understood vehicle. A simulator model is constructed from a force and moment analysis of the vehicle, allowing for a validation of the controller in preparation for flight testing. Data from flight testing of the system is provided.", "title": "" }, { "docid": "eea57066c7cd0b778188c2407c8365f3", "text": "For over two decades, video streaming over the Internet has received a substantial amount of attention from both academia and industry. Starting from the design of transport protocols for streaming video, research interests have later shifted to the peer-to-peer paradigm of designing streaming protocols at the application layer. More recent research has focused on building more practical and scalable systems, using Dynamic Adaptive Streaming over HTTP. 
In this article, we provide a retrospective view of the research results over the past two decades, with a focus on peer-to-peer streaming protocols and the effects of cloud computing and social media.", "title": "" }, { "docid": "d72e4df2e396a11ae7130ca7e0b2fb56", "text": "Advances in location-acquisition and wireless communication technologies have led to wider availability of spatio-temporal (ST) data, which has unique spatial properties (i.e. geographical hierarchy and distance) and temporal properties (i.e. closeness, period and trend). In this paper, we propose a <u>Deep</u>-learning-based prediction model for <u>S</u>patio-<u>T</u>emporal data (DeepST). We leverage ST domain knowledge to design the architecture of DeepST, which is comprised of two components: spatio-temporal and global. The spatio-temporal component employs the framework of convolutional neural networks to simultaneously model spatial near and distant dependencies, and temporal closeness, period and trend. The global component is used to capture global factors, such as day of the week, weekday or weekend. Using DeepST, we build a real-time crowd flow forecasting system called UrbanFlow1. Experiment results on diverse ST datasets verify DeepST's ability to capture ST data's spatio-temporal properties, showing the advantages of DeepST beyond four baseline methods.", "title": "" }, { "docid": "92a0fb602276952962762b07e7cd4d2b", "text": "Representation of video is a vital problem in action recognition. This paper proposes Stacked Fisher Vectors (SFV), a new representation with multi-layer nested Fisher vector encoding, for action recognition. In the first layer, we densely sample large subvolumes from input videos, extract local features, and encode them using Fisher vectors (FVs). The second layer compresses the FVs of subvolumes obtained in previous layer, and then encodes them again with Fisher vectors. Compared with standard FV, SFV allows refining the representation and abstracting semantic information in a hierarchical way. Compared with recent mid-level based action representations, SFV need not to mine discriminative action parts but can preserve mid-level information through Fisher vector encoding in higher layer. We evaluate the proposed methods on three challenging datasets, namely Youtube, J-HMDB, and HMDB51. Experimental results demonstrate the effectiveness of SFV, and the combination of the traditional FV and SFV outperforms stateof-the-art methods on these datasets with a large margin.", "title": "" }, { "docid": "635da218aa9a1b528fbc378844b393fd", "text": "A variety of nonlinear, including semidefinite, relaxations have been developed in recent years for nonconvex optimization problems. Their potential can be realized only if they can be solved with sufficient speed and reliability. Unfortunately, state-of-the-art nonlinear programming codes are significantly slower and numerically unstable compared to linear programming software. In this paper, we facilitate the reliable use of nonlinear convex relaxations in global optimization via a polyhedral branch-and-cut approach. Our algorithm exploits convexity, either identified automatically or supplied through a suitable modeling language construct, in order to generate polyhedral cutting planes and relaxations for multivariate nonconvex problems. 
We prove that, if the convexity of a univariate or multivariate function is apparent by decomposing it into convex subexpressions, our relaxation constructor automatically exploits this convexity in a manner that is much superior to developing polyhedral outer approximators for the original function. The convexity of functional expressions that are composed to form nonconvex expressions is also automatically exploited. Root-node relaxations are computed for 87 problems from globallib and minlplib, and detailed computational results are presented for globally solving 26 of these problems with BARON 7.2, which implements the proposed techniques. The use of cutting planes for these problems reduces root-node relaxation gaps by up to 100% and expedites the solution process, often by several orders of magnitude.", "title": "" }, { "docid": "961c4da65983926a8bc06189f873b006", "text": "By studying two well known hypotheses in economics, this paper illustrates how emergent properties can be shown in an agent-based artificial stock market. The two hypotheses considered are the efficient market hypothesis and the rational expectations hypothesis. We inquire whether the macrobehavior depicted by these two hypotheses is consistent with our understanding of the microbehavior. In this agent-based model, genetic programming is applied to evolving a population of traders learning over time. We first apply a series of econometric tests to show that the EMH and the REH can be satisfied with some portions of the artificial time series. Then, by analyzing traders’ behavior, we show that these aggregate results cannot be interpreted as a simple scaling-up of individual behavior. A conjecture based on sunspot-like signals is proposed to explain why macrobehavior can be very different from microbehavior. We assert that the huge search space attributable to genetic programming can induce sunspot-like signals, and we use simulated evolved complexity of forecasting rules and Granger causality tests to examine this assertion. © 2002 Elsevier Science B.V. All rights reserved. JEL classification: G12: asset pricing; G14: information and market efficiency; D83: search, learning, and information", "title": "" }, { "docid": "7d57caa810120e1590ad277fb8113222", "text": "Cancer is increasing the total number of unexpected deaths around the world. Until now, cancer research could not significantly contribute to a proper solution for the cancer patient, and as a result, the high death rate is uncontrolled. The present research aim is to extract the significant prevention factors for particular types of cancer. To find out the prevention factors, we first constructed a prevention factor data set with an extensive literature review on bladder, breast, cervical, lung, prostate and skin cancer. We subsequently employed three association rule mining algorithms, Apriori, Predictive apriori and Tertius algorithms in order to discover most of the significant prevention factors against these specific types of cancer. Experimental results illustrate that Apriori is the most useful association rule-mining algorithm to be used in the discovery of prevention factors.", "title": "" }, { "docid": "61ad7938355b899b2934bed1d5777e95", "text": "Erythema annulare centrifugum (EAC) is a disease of unknown etiology, although it has been variously associated with hypersensitivity reactions, infections, hormonal disorders, rheumatological and liver diseases, dysproteinemias, drugs, and occult tumors. 
García-Muret et al described a subtype of EAC with annual relapses that occurred in the summer.1 Nonetheless, in our hospital we have observed how 2 patients with longstanding EAC presented a clear clinical improvement in response to natural exposure to sunlight during the summer months. The first patient was a woman aged 22 years, with no known diseases, who presented an 8-year history of lesions on the trunk and upper and lower limbs. The asymptomatic and sometimes slightly pruritic lesions underwent episodes of centrifugal spread in winter. Examination revealed several erythematous plaques varying in size between 2 cm and 8 cm. The larger lesions were annular, with erythematous borders that were slightly more raised and trailing scale (Figure 1). A culture of scales from the lesions was negative on 2 occasions. The Spanish Contact Dermatitis and Skin Allergy Research Group (GEIDAC) standard battery of patch tests were all negative. Complete blood count, basic blood chemistry, antibody test, chest x-ray, and abdominal ultrasound results were normal. Superficial perivascular dermatitis was reported on the 2 occasions biopsies were performed. The patient had failed to respond to treatment with topical corticosteroids and antifungal agents. To prevent possible postinflammatory hyperpigmentation the patient had avoided sunbathing during the summer months. However, her last revision revealed that the lesions had completely disappeared following continuous sun exposure during her holidays (Figure 2). The second patient was a man aged 27 years. Since the age of 16 years he had presented with occasional flare-ups, on the trunk and limbs, of erythematous lesions with centrifugal spread and a scaly border. Routine blood tests and antinuclear antibodies were normal or negative, cultures were negative, and a histopathology study merely showed nonspecific chronic dermatitis. Flare-ups were not associated with any triggering factor, and the lesions had not responded to treatment with antifungal agents or topical corticosteroids. Nonetheless, the patient’s condition had improved during the summer, coinciding with exposure to sunlight. EAC, which was originally described by Darier in 1916, presents as annular plaques with clear central areas and slightly raised erythematous borders with trailing scale. Centrifugal growth gives rise to polycyclic patterns in the plaques. The disease follows a chronic course marked by exacerbations and remissions. The most frequent lesion", "title": "" }, { "docid": "0a65c096f91206c868f05bea9acc28fd", "text": "This paper presents a review on recent developments in BLDC motor controllers and studies on four quadrant operation of BLDC drive along with active PFC. The main areas reviewed include Sensor-less control, Direct Torque Control (DTC), Fuzzy logic control, controller for four quadrant operation and active Power Factor Corrected (PFC) converter fed BLDC motor drive. A comprehensive study has been done on four quadrant operation and active PFC converter fed BLDC motor drive with simulation in MATLAB/SIMULINK. The proposed control algorithm for four quadrant operation detects the speed reversal requirement and changes the quadrant of operation accordingly. 
In PFC converter fed BLDC motor drive, a Boost converter working in continuous current mode is designed to improve the supply power factor.", "title": "" }, { "docid": "1a6e9229f6bc8f6dc0b9a027e1d26607", "text": "− This work illustrates an analysis of Rogowski coils for power applications, when operating under non ideal measurement conditions. The developed numerical model, validated by comparison with other methods and experiments, enables to investigate the effects of the geometrical and constructive parameters on the measurement behavior of the coil.", "title": "" }, { "docid": "3f07c471245b2e8cc369bc591a035201", "text": "Test automation is a widely-used approach to reduce the cost of manual software testing. However, if it is not planned or conducted properly, automated testing would not necessarily be more cost effective than manual testing. Deciding what parts of a given System Under Test (SUT) should be tested in an automated fashion and what parts should remain manual is a frequently-asked and challenging question for practitioner testers. In this study, we propose a search-based approach for deciding what parts of a given SUT should be tested automatically to gain the highest Return On Investment (ROI). This work is the first systematic approach for this problem, and significance of our approach is that it considers automation in the entire testing process (i.e., from test-case design, to test scripting, to test execution, and test-result evaluation). The proposed approach has been applied in an industrial setting in the context of a software product used in the oil and gas industry in Canada. Among the results of the case study is that, when planned and conducted properly using our decision-support approach, test automation provides the highest ROI. In this study, we show that if automation decision is taken effectively, test-case design, test execution, and test evaluation can result in about 307%, 675%, and 41% ROI in 10 rounds of using automated test suites.", "title": "" }, { "docid": "0d11c7f94973be05d906f94238d706e4", "text": "Head-Mounted Displays (HMDs) combined with 3-or-more Degree-of-Freedom (DoF) input enable rapid manipulation of stereoscopic 3D content. However, such input is typically performed with hands in midair and therefore lacks precision and stability. Also, recent consumer-grade HMDs suffer from limited angular resolution and/or limited field-of-view as compared to a desktop monitor. We present the DualCAD system that implements two solutions to these problems. First, the user may freely switch at runtime between an augmented reality HMD mode, and a traditional desktop mode with precise 2D mouse input and an external desktop monitor. Second, while in the augmented reality HMD mode, the user holds a smartphone in their non-dominant hand that is tracked with 6 DoF, allowing it to be used as a complementary high-resolution display as well as an alternative input device for stylus or multitouch input. Two novel bimanual interaction techniques that leverage the properties of the smartphone are presented. We also report initial user feedback.", "title": "" }, { "docid": "d8c5ff196db9acbea12e923b2dcef276", "text": "MoS<sub>2</sub>-graphene-based hybrid structures are biocompatible and useful in the field of biosensors. Herein, we propose a heterostructured MoS<sub>2</sub>/aluminum (Al) film/MoS<sub>2</sub>/graphene as a highly sensitive surface plasmon resonance (SPR) biosensor based on the Otto configuration. 
The sensitivity of the proposed biosensor is enhanced by using three methods. First, prisms of different refractive index have been discussed and it is found that sensitivity can be enhanced by using a low refractive index prism. Second, the influence of the thickness of the air layer on the sensitivity is analyzed and the optimal thickness of air is obtained. Finally, the sensitivity improvement and mechanism by using molybdenum disulfide (MoS<sub>2</sub>)–graphene hybrid structure is revealed. The maximum sensitivity ∼ 190.83°/RIU is obtained with six layers of MoS<sub>2</sub> coating on both surfaces of Al thin film.", "title": "" }, { "docid": "405022c5a2ca49973eaaeb1e1ca33c0f", "text": "BACKGROUND\nPreanalytical factors are the main source of variation in clinical chemistry testing and among the major determinants of preanalytical variability, sample hemolysis can exert a strong influence on result reliability. Hemolytic samples are a rather common and unfavorable occurrence in laboratory practice, as they are often considered unsuitable for routine testing due to biological and analytical interference. However, definitive indications on the analytical and clinical management of hemolyzed specimens are currently lacking. Therefore, the present investigation evaluated the influence of in vitro blood cell lysis on routine clinical chemistry testing.\n\n\nMETHODS\nNine aliquots, prepared by serial dilutions of homologous hemolyzed samples collected from 12 different subjects and containing a final concentration of serum hemoglobin ranging from 0 to 20.6 g/L, were tested for the most common clinical chemistry analytes. Lysis was achieved by subjecting whole blood to an overnight freeze-thaw cycle.\n\n\nRESULTS\nHemolysis interference appeared to be approximately linearly dependent on the final concentration of blood-cell lysate in the specimen. This generated a consistent trend towards overestimation of alanine aminotransferase (ALT), aspartate aminotransferase (AST), creatinine, creatine kinase (CK), iron, lactate dehydrogenase (LDH), lipase, magnesium, phosphorus, potassium and urea, whereas mean values of albumin, alkaline phosphatase (ALP), chloride, gamma-glutamyltransferase (GGT), glucose and sodium were substantially decreased. Clinically meaningful variations of AST, chloride, LDH, potassium and sodium were observed in specimens displaying mild or almost undetectable hemolysis by visual inspection (serum hemoglobin < 0.6 g/L). The rather heterogeneous and unpredictable response to hemolysis observed for several parameters prevented the adoption of reliable statistic corrective measures for results on the basis of the degree of hemolysis.\n\n\nCONCLUSION\nIf hemolysis and blood cell lysis result from an in vitro cause, we suggest that the most convenient corrective solution might be quantification of free hemoglobin, alerting the clinicians and sample recollection.", "title": "" }, { "docid": "10a2fefd81b61e3184d3fdc018ff42ab", "text": "Recently, models based on deep neural networks have dominated the fields of scene text detection and recognition. In this paper, we investigate the problem of scene text spotting, which aims at simultaneous text detection and recognition in natural images. An end-to-end trainable neural network model for scene text spotting is proposed. The proposed model, named as Mask TextSpotter, is inspired by the newly published work Mask R-CNN. 
Different from previous methods that also accomplish text spotting with end-to-end trainable deep neural networks, Mask TextSpotter takes advantage of simple and smooth end-to-end learning procedure, in which precise text detection and recognition are acquired via semantic segmentation. Moreover, it is superior to previous methods in handling text instances of irregular shapes, for example, curved text. Experiments on ICDAR2013, ICDAR2015 and Total-Text demonstrate that the proposed method achieves state-of-the-art results in both scene text detection and end-to-end text recognition tasks.", "title": "" }, { "docid": "9ffb4220530a4758ea6272edf6e7e531", "text": "Process mining allows analysts to exploit logs of historical executions of business processes to extract insights regarding the actual performance of these processes. One of the most widely studied process mining operations is automated process discovery. An automated process discovery method takes as input an event log, and produces as output a business process model that captures the control-flow relations between tasks that are observed in or implied by the event log. Various automated process discovery methods have been proposed in the past two decades, striking different tradeoffs between scalability, accuracy, and complexity of the resulting models. However, these methods have been evaluated in an ad-hoc manner, employing different datasets, experimental setups, evaluation measures, and baselines, often leading to incomparable conclusions and sometimes unreproducible results due to the use of closed datasets. This article provides a systematic review and comparative evaluation of automated process discovery methods, using an open-source benchmark and covering 12 publicly-available real-life event logs, 12 proprietary real-life event logs, and nine quality metrics. The results highlight gaps and unexplored tradeoffs in the field, including the lack of scalability of some methods and a strong divergence in their performance with respect to the different quality metrics used.", "title": "" }, { "docid": "b7bf40c61ff4c73a8bbd5096902ae534", "text": "—In therapeutic and functional applications transcutaneous electrical stimulation (TES) is still the most frequently applied technique for muscle and nerve activation despite the huge efforts made to improve implantable technologies. Stimulation electrodes play the important role in interfacing the tissue with the stimulation unit. Between the electrode and the excitable tissue there are a number of obstacles in form of tissue resistivities and permittivities that can only be circumvented by magnetic fields but not by electric fields and currents. However, the generation of magnetic fields needed for the activation of excitable tissues in the human body requires large and bulky equipment. TES devices on the other hand can be built cheap, small and light weight. The weak part in TES is the electrode that cannot be brought close enough to the excitable tissue and has to fulfill a number of requirements to be able to act as efficient as possible. 
The present review article summarizes the most important factors that influence efficient TES, presents and discusses currently used electrode materials, designs and configurations, and points out findings that have been obtained through modeling, simulation and testing.", "title": "" }, { "docid": "43f1cc712b3803ef7ac8273136dbe75d", "text": "Improved understanding of the anatomy and physiology of the aging face has laid the foundation for adopting an earlier and more comprehensive approach to facial rejuvenation, shifting the focus from individual wrinkle treatment and lift procedures to a holistic paradigm that considers the entire face and its structural framework. This article presents an overview of a comprehensive method to address facial aging. The key components to the reported strategy for improving facial cosmesis include, in addition to augmentation of volume loss, protection with sunscreens and antioxidants; promotion of epidermal cell turnover with techniques such as superficial chemical peels; microlaser peels and microdermabrasion; collagen stimulation and remodeling via light, ultrasound, or radiofrequency (RF)-based methods; and muscle control with botulinum toxin. For the treatment of wrinkles and for the augmentation of pan-facial dermal lipoatrophy, several types of fillers and volumizers including hyaluronic acid (HA), autologous fat, and calcium hydroxylapatite (CaHA) or injectable poly-l-lactic acid (PLLA) are available. A novel bimodal, trivector technique to restore structural facial volume loss that combines supraperiosteal depot injections of volume-depleted fat pads and dermal/subcutaneous injections for panfacial lipoatrophy with PLLA is presented. The combination of treatments with fillers; toxins; light-, sound-, and RF-based technologies; and surgical procedures may help to forestall the facial aging process and provide more natural results than are possible with any of these techniques alone. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .", "title": "" } ]
scidocsrr
4d6e05afcf60f8348b92ec5f326e51da
A Mechanism for Turing Pattern Formation with Active and Passive Transport
[ { "docid": "f4db5b7cc70661ff780c96cd58f6624e", "text": "Error Thresholds and Their Relation to Optimal Mutation Rates p. 54 Are Artificial Mutation Biases Unnatural? p. 64 Evolving Mutation Rates for the Self-Optimisation of Genetic Algorithms p. 74 Statistical Reasoning Strategies in the Pursuit and Evasion Domain p. 79 An Evolutionary Method Using Crossover in a Food Chain Simulation p. 89 On Self-Reproduction and Evolvability p. 94 Some Techniques for the Measurement of Complexity in Tierra p. 104 A Genetic Neutral Model for Quantitative Comparison of Genotypic Evolutionary Activity p. 109", "title": "" } ]
[ { "docid": "3bc9e621a0cfa7b8791ae3fb94eff738", "text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.", "title": "" }, { "docid": "b94d33cc0366703b48d75ad844422c85", "text": "We propose a dataflow architecture, called HyperFlow, that offers a supporting infrastructure that creates an abstraction layer over computation resources and naturally exposes heterogeneous computation to dataflow processing. In order to show the efficiency of our system as well as testing it, we have included a set of synthetic and real-case applications. First, we designed a general suite of micro-benchmarks that captures main parallel pipeline structures and allows evaluation of HyperFlow under different stress conditions. Finally, we demonstrate the potential of our system with relevant applications in visualization. Implementations in HyperFlow are shown to have greater performance than actual hand-tuning codes, yet still providing high scalability on different platforms.", "title": "" }, { "docid": "2dd42cce112c61950b96754bb7b4df10", "text": "Hierarchical methods have been widely explored for object recognition, which is a critical component of scene understanding. However, few existing works are able to model the contextual information (e.g., objects co-occurrence) explicitly within a single coherent framework for scene understanding. Towards this goal, in this paper we propose a novel three-level (superpixel level, object level and scene level) hierarchical model to address the scene categorization problem. Our proposed model is a coherent probabilistic graphical model that captures the object co-occurrence information for scene understanding with a probabilistic chain structure. The efficacy of the proposed model is demonstrated by conducting experiments on the LabelMe dataset.", "title": "" }, { "docid": "cde1419d6b4912b414a3c83139dc3f06", "text": "This book results from a decade of presenting the user-centered design (UCD) methodology for hundreds of companies (p. xxiii) and appears to be the book complement to the professional development short course. Its purpose is to encourage software developers to focus on the total user experience of software products during the whole of the development cycle. 
The notion of the “total user experience” is valuable because it focuses attention on the whole product-use cycle, from initial awareness through productive use.", "title": "" }, { "docid": "72782fdcc61d1059bce95fe4e7872f5b", "text": "In object prototype learning and similar tasks, median computation is an important technique for capturing the essential information of a given set of patterns. In this paper, we extend the median concept to the domain of graphs. In terms of graph distance, we introduce the novel concepts of set median and generalized median of a set of graphs. We study properties of both types of median graphs. For the more complex task of computing generalized median graphs, a genetic search algorithm is developed. Experiments conducted on randomly generated graphs demonstrate the advantage of generalized median graphs compared to set median graphs and the ability of our genetic algorithm to find approximate generalized median graphs in reasonable time. Application examples with both synthetic and nonsynthetic data are shown to illustrate the practical usefulness of the concept of median graphs. Index Terms—Median graph, graph distance, graph matching, genetic algorithm,", "title": "" }, { "docid": "9b504f633488016fad865dee6fbdf3ef", "text": "Transmission lines are an important part of the power system. Transmission and distribution lines connect generating units to consumers and help maintain the continuity of electric supply, allowing high power to be transferred economically between systems and from remote generating fields. Transmission lines run over hundreds of kilometers to supply electrical power to consumers, so it is important for industries to detect faults in the power system as early as possible. \"Fault Detection and Auto Line Distribution System With GSM Module\" is an automation technique used for fault detection in the AC supply and automatic sharing of power. The significance of undetectable faults is that they represent a serious public safety hazard as well as a risk of arcing ignition of fires. This paper presents under-voltage and over-current fault detection, which is useful in many applications such as homes and industry.", "title": "" }, { "docid": "641d09ff15b731b679dbe3e9004c1578", "text": "In recent years, geological disposal of radioactive waste has focused on placement of high- and intermediate-level wastes in mined underground caverns at depths of 500–800 m. Notwithstanding the billions of dollars spent to date on this approach, the difficulty of finding suitable sites and demonstrating to the public and regulators that a robust safety case can be developed has frustrated attempts to implement disposal programmes in several countries, and no disposal facility for spent nuclear fuel exists anywhere. The concept of deep borehole disposal was first considered in the 1950s, but was rejected as it was believed to be beyond existing drilling capabilities. Improvements in drilling and associated technologies and advances in sealing methods have prompted a re-examination of this option for the disposal of high-level radioactive wastes, including spent fuel and plutonium. Since the 1950s, studies of deep boreholes have involved minimal investment. However, deep borehole disposal offers a potentially safer, more secure, cost-effective and environmentally sound solution for the long-term management of high-level radioactive waste than mined repositories. Potentially it could accommodate most of the world's spent fuel inventory. 
This paper discusses the concept, the status of existing supporting equipment and technologies and the challenges that remain.", "title": "" }, { "docid": "de7eb0735d6cd2fb13a00251d89b0fbc", "text": "Classical conditioning, the simplest form of associative learning, is one of the most studied paradigms in behavioural psychology. Since the formal description of classical conditioning by Pavlov, lesion studies in animals have identified a number of anatomical structures involved in, and necessary for, classical conditioning. In the 1980s, with the advent of functional brain imaging techniques, particularly positron emission tomography (PET), it has been possible to study the functional anatomy of classical conditioning in humans. The development of functional magnetic resonance imaging (fMRI)--in particular single-trial or event-related fMRI--has now considerably advanced the potential of neuroimaging for the study of this form of learning. Recent event-related fMRI and PET studies are adding crucial data to the current discussion about the putative role of the amygdala in classical fear conditioning in humans.", "title": "" }, { "docid": "d66f86ac2b42d13ba2199e41c85d3c93", "text": "We introduce Fluid Annotation, an intuitive human-machine collaboration interface for annotating the class label and outline of every object and background region in an image. Fluid annotation is based on three principles:(I) Strong Machine-Learning aid. We start from the output of a strong neural network model, which the annotator can edit by correcting the labels of existing regions, adding new regions to cover missing objects, and removing incorrect regions.The edit operations are also assisted by the model.(II) Full image annotation in a single pass. As opposed to performing a series of small annotation tasks in isolation [51,68], we propose a unified interface for full image annotation in a single pass.(III) Empower the annotator.We empower the annotator to choose what to annotate and in which order. This enables concentrating on what the ma-chine does not already know, i.e. putting human effort only on the errors it made. This helps using the annotation budget effectively.\n Through extensive experiments on the COCO+Stuff dataset [11,51], we demonstrate that Fluid Annotation leads to accurate an-notations very efficiently, taking 3x less annotation time than the popular LabelMe interface [70].", "title": "" }, { "docid": "de8661c2e63188464de6b345bfe3a908", "text": "Modern computer games show potential not just for engaging and entertaining users, but also in promoting learning. Game designers employ a range of techniques to promote long-term user engagement and motivation. These techniques are increasingly being employed in so-called serious games, games that have nonentertainment purposes such as education or training. Although such games share the goal of AIED of promoting deep learner engagement with subject matter, the techniques employed are very different. Can AIED technologies complement and enhance serious game design techniques, or does good serious game design render AIED techniques superfluous? This paper explores these questions in the context of the Tactical Language Training System (TLTS), a program that supports rapid acquisition of foreign language and cultural skills. The TLTS combines game design principles and game development tools with learner modelling, pedagogical agents, and pedagogical dramas. 
Learners carry out missions in a simulated game world, interacting with non-player characters. A virtual aide assists the learners if they run into difficulties, and gives performance feedback in the context of preparatory exercises. Artificial intelligence plays a key role in controlling the behaviour of the non-player characters in the game; intelligent tutoring provides supplementary scaffolding.", "title": "" }, { "docid": "91771b6c50d7193e5612d9552913dec8", "text": "The expected diffusion of EVehicles (EVs) to limit the impact of fossil fuel on mobility is going to cause severe issues to the management of electric grid. A large number of charging stations is going to be installed on the power grid to support EVs. Each of the charging station could require more than 100 kW from the grid. The grid consumption is unpredictable and it depends from the need of EVs in the neighborhood. The impact of the EV on the power grid can be limited by the proper exploitation of Vehicle to Grid communication (V2G). The advent of Low Power Wide Area Network (LPWAN) promoted by Internet Of Things applications offers new opportunity for wireless communications. In this work, an example of such a technology (the LoRaWAN solution) is tested in a real-world scenario as a candidate for EV to grid communications. The experimental results highlight as LoRaWAN technology can be used to cover an area with a radius under 2 km, in an urban environment. At this distance, the Received Signal Strength Indicator (RSSI) is about −117 dBm. Such a result demonstrates the feasibility of the proposed approach.", "title": "" }, { "docid": "1e1d3d7a4997f6f58b7ed3f6b4ecb054", "text": "Image semantic segmentation is the task of partitioning image into several regions based on semantic concepts. In this paper, we learn a weakly supervised semantic segmentation model from social images whose labels are not pixel-level but image-level; furthermore, these labels might be noisy. We present a joint conditional random field model leveraging various contexts to address this issue. More specifically, we extract global and local features in multiple scales by convolutional neural network and topic model. Inter-label correlations are captured by visual contextual cues and label co-occurrence statistics. The label consistency between image-level and pixel-level is finally achieved by iterative refinement. Experimental results on two real-world image datasets PASCAL VOC2007 and SIFT-Flow demonstrate that the proposed approach outperforms state-of-the-art weakly supervised methods and even achieves accuracy comparable with fully supervised methods.", "title": "" }, { "docid": "9a82781af933251208aef5e683839346", "text": "We present a comprehensive overview of the stereoscopic Intel RealSense RGBD imaging systems. We discuss these systems’ mode-of-operation, functional behavior and include models of their expected performance, shortcomings, and limitations. We provide information about the systems’ optical characteristics, their correlation algorithms, and how these properties can affect different applications, including 3D reconstruction and gesture recognition. 
Our discussion covers the Intel RealSense R200 and the Intel RealSense D400 (formally RS400).", "title": "" }, { "docid": "3c28d7571e8d863b84ccf4edfc812dc6", "text": "The purpose of this project was to explore what attitudes physicians, nurses, and operating room technicians had about working with Certified Registered Nurse Anesthetists (CRNAs) to better understand practice barriers and facilitators. This Q methodology study used a purposive sample of operating room personnel from four institutions in the Midwestern United States. Participants completed a -4 to +4 rank-ordering of their level of agreement with 34 attitude statements representing a wide range of beliefs about nurse anesthetists. Centroid factor analysis with varimax rotation was used to analyze 24 returned Q sorts. Three distinct viewpoints emerged that explained 66% of the variance: favoring unrestricted practice, favoring anesthesiologist supervision, and favoring anesthesiologist practice. Research is needed on how to develop workplace attitudes that support autonomous nurse anesthetist practice and to understand preferences for restricted practice in team members other than physicians.", "title": "" }, { "docid": "57256bce5741b23fa4827fad2ad9e321", "text": "This study assessed the depth of online learning, with a focus on the nature of online interaction in four distance education course designs. The Study Process Questionnaire was used to measure the shift in students’ approach to learning from the beginning to the end of the courses. Design had a significant impact on the nature of the interaction and whether students approached learning in a deep and meaningful manner. Structure and leadership were found to be crucial for online learners to take a deep and meaningful approach to learning.", "title": "" }, { "docid": "c87a1cea06d135628691a912cad582c1", "text": "OBJECTIVE\nDelphi technique is a structured process commonly used to developed healthcare quality indicators, but there is a little recommendation for researchers who wish to use it. This study aimed 1) to describe reporting of the Delphi method to develop quality indicators, 2) to discuss specific methodological skills for quality indicators selection 3) to give guidance about this practice.\n\n\nMETHODOLOGY AND MAIN FINDING\nThree electronic data bases were searched over a 30 years period (1978-2009). All articles that used the Delphi method to select quality indicators were identified. A standardized data extraction form was developed. Four domains (questionnaire preparation, expert panel, progress of the survey and Delphi results) were assessed. Of 80 included studies, quality of reporting varied significantly between items (9% for year's number of experience of the experts to 98% for the type of Delphi used). Reporting of methodological aspects needed to evaluate the reliability of the survey was insufficient: only 39% (31/80) of studies reported response rates for all rounds, 60% (48/80) that feedback was given between rounds, 77% (62/80) the method used to achieve consensus and 57% (48/80) listed quality indicators selected at the end of the survey. A modified Delphi procedure was used in 49/78 (63%) with a physical meeting of the panel members, usually between Delphi rounds. Median number of panel members was 17(Q1:11; Q3:31). In 40/70 (57%) studies, the panel included multiple stakeholders, who were healthcare professionals in 95% (38/40) of cases. 
Among 75 studies describing criteria to select quality indicators, 28 (37%) used validity and 17 (23%) feasibility.\n\n\nCONCLUSION\nThe use and reporting of the Delphi method for quality indicators selection need to be improved. We provide some guidance to the investigators to improve the using and reporting of the method in future surveys.", "title": "" }, { "docid": "9c77080dbab62dc7a5ddafcde98d094c", "text": "A cornucopia of dimensionality reduction techniques have emerged over the past decade, leaving data analysts with a wide variety of choices for reducing their data. Means of evaluating and comparing low-dimensional embeddings useful for visualization, however, are very limited. When proposing a new technique it is common to simply show rival embeddings side-by-side and let human judgment determine which embedding is superior. This study investigates whether such human embedding evaluations are reliable, i.e., whether humans tend to agree on the quality of an embedding. We also investigate what types of embedding structures humans appreciate a priori. Our results reveal that, although experts are reasonably consistent in their evaluation of embeddings, novices generally disagree on the quality of an embedding. We discuss the impact of this result on the way dimensionality reduction researchers should present their results, and on applicability of dimensionality reduction outside of machine learning.", "title": "" }, { "docid": "c66fc0dbd8774fdb5fea3990985e65d7", "text": "Since 1985 various evolutionary approaches to multiobjective optimization have been developed, capable of searching for multiple solutions concurrently in a single run. But the few comparative studies of different methods available to date are mostly qualitative and restricted to two approaches. In this paper an extensive, quantitative comparison is presented, applying four multiobjective evolutionary algorithms to an extended 0/1 knapsack problem. 1 Introduction Many real-world problems involve simultaneous optimization of several incommensurable and often competing objectives. Usually, there is no single optimal solution, but rather a set of alternative solutions. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered. They are known as Pareto-optimal solutions. Mathematically, the concept of Pareto-optimality can be defined as follows: Let us consider, without loss of generality, a multiobjective maximization problem with m parameters (decision variables) and n objectives: Maximize $y = f(x) = (f_1(x), f_2(x), \\ldots, f_n(x))$ (1) where $x = (x_1, x_2, \\ldots, x_m) \\in X$ and $y = (y_1, y_2, \\ldots, y_n) \\in Y$ are tuples. A decision vector $a \\in X$ is said to dominate a decision vector $b \\in X$ (also written as $a \\succ b$) iff $\\forall i \\in \\{1, 2, \\ldots, n\\}: f_i(a) \\geq f_i(b) \\wedge \\exists j \\in \\{1, 2, \\ldots, n\\}: f_j(a) > f_j(b)$ (2) Additionally, in this study we say a covers b iff $a \\succ b$ or $a = b$. All decision vectors which are not dominated by any other decision vector are called nondominated or Pareto-optimal. Often, there is a special interest in finding or approximating the Pareto-optimal set, mainly to gain deeper insight into the problem and knowledge about alternate solutions, respectively. Evolutionary algorithms (EAs) seem to be especially suited for this task, because they process a set of solutions in parallel, eventually exploiting similarities of solutions by crossover. 
Some researchers suggest that multiobjective search and optimization might be a problem area where EAs do better than other blind search strategies [1][12]. Since the mid-eighties various multiobjective EAs have been developed, capable of searching for multiple Pareto-optimal solutions concurrently in a single run.", "title": "" }, { "docid": "39ccd0efd846c2314da557b73a326e85", "text": "We address the problem of recognizing situations in images. Given an image, the task is to predict the most salient verb (action), and fill its semantic roles such as who is performing the action, what is the source and target of the action, etc. Different verbs have different roles (e.g. attacking has weapon), and each role can take on many possible values (nouns). We propose a model based on Graph Neural Networks that allows us to efficiently capture joint dependencies between roles using neural networks defined on a graph. Experiments with different graph connectivities show that our approach that propagates information between roles significantly outperforms existing work, as well as multiple baselines. We obtain roughly 3-5% improvement over previous work in predicting the full situation. We also provide a thorough qualitative analysis of our model and influence of different roles in the verbs.", "title": "" }, { "docid": "b9ca1209ce50bf527d68109dbdf7431c", "text": "The MATLAB model of the analog multiplier based on the sigma delta modulation is developed. Different modes of multiplier are investigated and obtained results are compared with analytical results.", "title": "" } ]
scidocsrr
8cc22c2e569fdc5c1f7ed74dec3fff9a
FACTORIE: Probabilistic Programming via Imperatively Defined Factor Graphs
[ { "docid": "bc0def2cdcb570feaee55293cea0c97f", "text": "Inductive Logic Programming (ILP) is a new discipline which investigates the inductive construction of rst-order clausal theories from examples and background knowledge. We survey the most important theories and methods of this new eld. Firstly, various problem speciications of ILP are formalised in semantic settings for ILP, yielding a \\model-theory\" for ILP. Secondly, a generic ILP algorithm is presented. Thirdly, the inference rules and corresponding operators used in ILP are presented, resulting in a \\proof-theory\" for ILP. Fourthly, since inductive inference does not produce statements which are assured to follow from what is given, inductive inferences require an alternative form of justiication. This can take the form of either probabilistic support or logical constraints on the hypothesis language. Information compression techniques used within ILP are presented within a unifying Bayesian approach to connrmation and corroboration of hypotheses. Also, diierent ways to constrain the hypothesis language, or specify the declarative bias are presented. Fifthly, some advanced topics in ILP are addressed. These include aspects of computational learning theory as applied to ILP, and the issue of predicate invention. Finally, we survey some applications and implementations of ILP. ILP applications fall under two diierent categories: rstly scientiic discovery and knowledge acquisition, and secondly programming assistants.", "title": "" } ]
[ { "docid": "5fe43f0b23b0cfd82b414608e60db211", "text": "The Distress Analysis Interview Corpus (DAIC) contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post traumatic stress disorder. The interviews are conducted by humans, human controlled agents and autonomous agents, and the participants include both distressed and non-distressed individuals. Data collected include audio and video recordings and extensive questionnaire responses; parts of the corpus have been transcribed and annotated for a variety of verbal and non-verbal features. The corpus has been used to support the creation of an automated interviewer agent, and for research on the automatic identification of psychological distress.", "title": "" }, { "docid": "7b96cba9b115d842f0e6948434b40b37", "text": "A broadband printed microstrip antenna having cross polarization level >; 15 dB with improved gain in the entire frequency band is presented. Principle of stacking is implemented on a strip loaded slotted broadband patch antenna for enhancing the gain without affecting the broadband impedance matching characteristics and offsetting the position of the upper patch excites a lower resonance which enhances the bandwidth further. The antenna has a dimension of 42 × 55 × 4.8 mm3 when printed on a substrate of dielectric constant 4.2 and has a 2:1 VSWR bandwidth of 34.9%. The antenna exhibits a peak gain of 8.07 dBi and a good front to back ratio better than 12 dB is observed throughout the entire operating band. Simulated and experimental reflection characteristics of the antenna with and without stacking along with offset variation studies, radiation patterns and gain of the final antenna are presented.", "title": "" }, { "docid": "333b21433d17a9d271868e203c8a9481", "text": "The aim of stock prediction is to effectively predict future stock market trends (or stock prices), which can lead to increased profit. One major stock analysis method is the use of candlestick charts. However, candlestick chart analysis has usually been based on the utilization of numerical formulas. There has been no work taking advantage of an image processing technique to directly analyze the visual content of the candlestick charts for stock prediction. Therefore, in this study we apply the concept of image retrieval to extract seven different wavelet-based texture features from candlestick charts. Then, similar historical candlestick charts are retrieved based on different texture features related to the query chart, and the “future” stock movements of the retrieved charts are used for stock prediction. To assess the applicability of this approach to stock prediction, two datasets are used, containing 5-year and 10-year training and testing sets, collected from the Dow Jones Industrial Average Index (INDU) for the period between 1990 and 2009. Moreover, two datasets (2010 and 2011) are used to further validate the proposed approach. The experimental results show that visual content extraction and similarity matching of candlestick charts is a new and useful analytical method for stock prediction. More specifically, we found that the extracted feature vectors of 30, 90, and 120, the number of textual features extracted from the candlestick charts in the BMP format, are more suitable for predicting stock movements, while the 90 feature vector offers the best performance for predicting short- and medium-term stock movements. 
That is, using the 90 feature vector provides the lowest MAPE (3.031%) and Theil’s U (1.988%) rates in the twenty-year dataset, and the best MAPE (2.625%, 2.945%) and Theil’s U (1.622%, 1.972%) rates in the two validation datasets (2010 and 2011).", "title": "" }, { "docid": "faad1a2863986f31f26f1e261d75096a", "text": "Multilabel classification is rapidly developing as an important aspect of modern predictive modeling, motivating study of its theoretical aspects. To this end, we propose a framework for constructing and analyzing multilabel classification metrics which reveals novel results on a parametric form for population optimal classifiers, and additional insight into the role of label correlations. In particular, we show that for multilabel metrics constructed as instance-, microand macroaverages, the population optimal classifier can be decomposed into binary classifiers based on the marginal instance-conditional distribution of each label, with a weak association between labels via the threshold. Thus, our analysis extends the state of the art from a few known multilabel classification metrics such as Hamming loss, to a general framework applicable to many of the classification metrics in common use. Based on the population-optimal classifier, we propose a computationally efficient and general-purpose plug-in classification algorithm, and prove its consistency with respect to the metric of interest. Empirical results on synthetic and benchmark datasets are supportive of our theoretical findings.", "title": "" }, { "docid": "6356a0272b95ade100ad7ececade9e36", "text": "We describe a browser extension, PwdHash, that transparently produces a different password for each site, improving web password security and defending against password phishing and other attacks. Since the browser extension applies a cryptographic hash function to a combination of the plaintext password entered by the user, data associated with the web site, and (optionally) a private salt stored on the client machine, theft of the password received at one site will not yield a password that is useful at another site. While the scheme requires no changes on the server side, implementing this password method securely and transparently in a web browser extension turns out to be quite difficult. We describe the challenges we faced in implementing PwdHash and some techniques that may be useful to anyone facing similar security issues in a browser environment.", "title": "" }, { "docid": "737231466c50ac647f247b60852026e2", "text": "The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people are accessing key-based security systems. Existing methods of obtaining such secret information rely on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user’s fine-grained hand movements, which enable attackers to reproduce the trajectories of the user’s hand and further to recover the secret key entries. 
In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user’s hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 7,000 key entry traces collected from 20 adults for key-based security systems (i.e., ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80 percent accuracy with only one try and more than 90 percent accuracy with three tries. Moreover, the performance of our system is consistently good even under low sampling rate and when inferring long PIN sequences. To the best of our knowledge, this is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.", "title": "" }, { "docid": "436657862080e0c37966ddba3df0c4b5", "text": "Scholarly digital libraries increasingly provide analytics to information within documents themselves. This includes information about the logical document structure of use to downstream components, such as search, navigation, and summarization. In this paper, the authors describe SectLabel, a module that further develops existing software to detect the logical structure of a document from existing PDF files, using the formalism of conditional random fields. While previous work has assumed access only to the raw text representation of the document, a key aspect of this work is to integrate the use of a richer representation of the document that includes features from optical character recognition (OCR), such as font size and text position. Experiments reveal that using such rich features improves logical structure detection by a significant 9 F1 points, over a suitable baseline, motivating the use of richer document representations in other digital library applications. DOI: 10.4018/978-1-4666-0900-6.ch014", "title": "" }, { "docid": "9a1bb9370031cbe9b6b3175b216aeea5", "text": "The area of an image multi-label classification is increase continuously in last few years, in machine learning and computer vision. Multi-label classification has attracted significant attention from researchers and has been applied to an image annotation. In multi-label classification, each instance is assigned to multiple classes; it is a common problem in data analysis. In this paper, represent general survey on the research work is going on in the field of multi-label classification. Finally, paper is concluded towards challenges in multi-label classification for images for future research.", "title": "" }, { "docid": "cf8d4be65f988bd45dc56dc8dc3988d2", "text": "In this paper, we deal with several aspects related to the control of tendon-based actuation systems for robotic devices. In particular, the problems that are considered in this paper are related to the modeling, identification, and control of tendons sliding on curved pathways, subject to friction and viscoelastic effects. Tendons made in polymeric materials are considered, and therefore, hysteresis in the transmission system characteristic must be taken into account as an additional nonlinear effect because of the plasticity and creep phenomena typical of these materials. 
With the aim of reproducing these behaviors, a viscoelastic model is used to model the tendon compliance. Particular attention has been given to the friction effects arising from the interaction between the tendon pathway and the tendon itself. This phenomenon has been characterized by means of a LuGre-like dynamic friction model to consider the effects that cannot be reproduced by employing a static friction model. A specific setup able to measure the tendon's tension in different points along its path has been designed in order to verify the tension distribution and identify the proper parameters. Finally, a simple control strategy for the compensation of these nonlinear effects and the control of the force that is applied by the tendon to the load is proposed and experimentally verified.", "title": "" }, { "docid": "3072c5458a075e6643a7679ccceb1417", "text": "A novel interleaved flyback converter with leakage energy recycled is proposed. The proposed converter is combined with dual-switch dual-transformer flyback topology. Two clamping diodes are used to reduce the voltage stress on power switches to the input voltage level and also to recycle leakage inductance energy to the input voltage and capacitor. Besides, the interleaved control is implemented to reduce the output current ripple. In addition, the voltage on the primary windings is reduced to the half of the input voltage and thus reducing the turns ratio of transformers to improve efficiency. The operating principle and the steady state analysis of the proposed converter are discussed in detail. Finally, an experimental prototype is implemented with 400V input voltage, 24V/300W output to verify the feasibility of the proposed converter. The experimental results reveals that the highest efficiency of the proposed converter is 94.42%, the full load efficiency is 92.7%, and the 10% load efficiency is 92.61%.", "title": "" }, { "docid": "3d7eb095e68a9500674493ee58418789", "text": "Hundreds of scholarly studies have investigated various aspects of the immensely popular Wikipedia. Although a number of literature reviews have provided overviews of this vast body of research, none of them has specifically focused on the readers of Wikipedia and issues concerning its readership. In this systematic literature review, we review 99 studies to synthesize current knowledge regarding the readership of Wikipedia and also provide an analysis of research methods employed. The scholarly research has found that Wikipedia is popular not only for lighter topics such as entertainment, but also for more serious topics such as health information and legal background. Scholars, librarians and students are common users of Wikipedia, and it provides a unique opportunity for educating students in digital", "title": "" }, { "docid": "aadc952471ecd67d0c0731fa5a375872", "text": "As the aircraft industry is moving towards the all electric and More Electric Aircraft (MEA), there is increase demand for electrical power in the aircraft. The trend in the aircraft industry is to replace hydraulic and pneumatic systems with electrical systems achieving more comfort and monitoring features. Moreover, the structure of MEA distribution system improves aircraft maintainability, reliability, flight safety and efficiency. Detailed descriptions of the modern MEA generation and distribution systems as well as the power converters and load types are explained and outlined. 
MEA electrical distribution systems are mainly in the form of multi-converter power electronic system.", "title": "" }, { "docid": "1c80fdc30b2b37443367dae187fbb376", "text": "The web is a catalyst for drawing people together around shared goals, but many groups never reach critical mass. It can thus be risky to commit time or effort to a goal: participants show up only to discover that nobody else did, and organizers devote significant effort to causes that never get off the ground. Crowdfunding has lessened some of this risk by only calling in donations when an effort reaches a collective monetary goal. However, it leaves unsolved the harder problem of mobilizing effort, time and participation. We generalize the concept into activation thresholds, commitments that are conditioned on others' participation. With activation thresholds, supporters only need to show up for an event if enough other people commit as well. Catalyst is a platform that introduces activation thresholds for on-demand events. For more complex coordination needs, Catalyst also provides thresholds based on time or role (e.g., a bake sale requiring commitments for bakers, decorators, and sellers). In a multi-month field deployment, Catalyst helped users organize events including food bank volunteering, on-demand study groups, and mass participation events like a human chess game. Our results suggest that activation thresholds can indeed catalyze a large class of new collective efforts.", "title": "" }, { "docid": "44f257275a36308ce088881fafc92d7c", "text": "Frauds related to the ATM (Automatic Teller Machine) are increasing day by day which is a serious issue. ATM security is used to provide protection against these frauds. Though security is provided for ATM machine, cases of robberies are increasing. Previous technologies provide security within machines for secure transaction, but machine is not neatly protected. The ATM machines are not safe since security provided traditionally were either by using RFID reader or by using security guard outside the ATM. This security is not sufficient because RFID card can be stolen and can be misused for robbery as well as watchman can be blackmailed by the thief. So there is a need to propose new technology which can overcome this problem. This paper proposes a system which aims to design real-time monitoring and controlling system. The system is implemented using Raspberry Pi and fingerprint module which make the system more secure, cost effective and stand alone. For controlling purpose, Embedded Web Server (EWS) is designed using Raspberry Pi which serves web page on which video footage of ATM center is seen and controlled. So the proposed system removes the drawback of manual controlling camera module and door also this system is stand alone and cost effective.", "title": "" }, { "docid": "65ecfef85ae09603afddde09a2c65bf4", "text": "We outline a representation for discrete multivariate distributions in terms of interventional potential functions that are globally normalized. This representation can be used to model the effects of interventions, and the independence properties encoded in this model can be represented as a directed graph that allows cycles. In addition to discussing inference and sampling with this representation, we give an exponential family parametrization that allows parameter estimation to be stated as a convex optimization problem; we also give a convex relaxation of the task of simultaneous parameter and structure learning using group ℓ1-regularization.
The model is evaluated on simulated data and intracellular flow cytometry data.", "title": "" }, { "docid": "854d3759757b3e335dac88adbea9734c", "text": "Micro Hotplate (MHP) is the key component in micro-sensors particularly gas sensors. In this paper, we have presented the design and simulation results of a meander micro heater based on platinum material. A comparative study by simulating two different heater thicknesses has also been presented in this paper. The membrane size is 1.4mm × 1.6mm and a thickness of 1.4μm. Above the membrane, a platinum film was deposed with a size of 1.1 × 1.1 mm and a various thickness of 0.1 μm and 0.15 μm. Power consumption and temperature distribution were determined in the Platinum micro heater's structure over a supply voltage of 5, 6 and 7 V.", "title": "" }, { "docid": "2effb3276d577d961f6c6ad18a1e7b3e", "text": "This paper extends the recovery of structure and motion to im age sequences with several independently moving objects. The mot ion, structure, and camera calibration are all a-priori unknown. The fundamental constraint that we introduce is that multiple motions must share the same camer parameters. Existing work on independent motions has not employed this constr ai t, and therefore has not gained over independent static-scene reconstructi ons. We show how this constraint leads to several new results in st ructure and motion recovery, where Euclidean reconstruction becomes pos ible in the multibody case, when it was underconstrained for a static scene. We sho w how to combine motions of high-relief, low-relief and planar objects. Add itionally we show that structure and motion can be recovered from just 4 points in th e uncalibrated, fixed camera, case. Experiments on real and synthetic imagery demonstrate the v alidity of the theory and the improvement in accuracy obtained using multibody an alysis.", "title": "" }, { "docid": "8fa8e875a948aed94b7682b86fcbc171", "text": "Do teams show stable conflict interaction patterns that predict their performance hours, weeks, or even months in advance? Two studies demonstrate that two of the same patterns of emotional interaction dynamics that distinguish functional from dysfunctional marriages also distinguish high from low-performance design teams in the field, up to 6 months in advance, with up to 91% accuracy, and based on just 15minutes of interaction data: Group Affective Balance, the balance of positive to negative affect during an interaction, and Hostile Affect, the expression of a set of specific negative behaviors were both found as predictors of team performance. The research also contributes a novel method to obtain a representative sample of a team's conflict interaction. Implications for our understanding of design work in teams and for the design of groupware and feedback intervention systems are discussed.", "title": "" }, { "docid": "f5c4c25286eb419eb8f7100702062180", "text": "The primary objective of this investigation was to quantitatively identify which training variables result in the greatest strength and hypertrophy outcomes with lower body low intensity training with blood flow restriction (LI-BFR). Searches were performed for published studies with certain criteria. First, the primary focus of the study must have compared the effects of low intensity endurance or resistance training alone to low intensity exercise with some form of blood flow restriction. Second, subject populations had to have similar baseline characteristics so that valid outcome measures could be made. 
Finally, outcome measures had to include at least one measure of muscle hypertrophy. All studies included in the analysis utilized MRI except for two which reported changes via ultrasound. The mean overall effect size (ES) for muscle strength for LI-BFR was 0.58 [95% CI: 0.40, 0.76], and 0.00 [95% CI: −0.18, 0.17] for low intensity training. The mean overall ES for muscle hypertrophy for LI-BFR training was 0.39 [95% CI: 0.35, 0.43], and −0.01 [95% CI: −0.05, 0.03] for low intensity training. Blood flow restriction resulted in significantly greater gains in strength and hypertrophy when performed with resistance training than with walking. In addition, performing LI-BFR 2–3 days per week resulted in the greatest ES compared to 4–5 days per week. Significant correlations were found between ES for strength development and weeks of duration, but not for muscle hypertrophy. This meta-analysis provides insight into the impact of different variables on muscular strength and hypertrophy to LI-BFR training.", "title": "" }, { "docid": "a649a105b1d127c9c9ea2a9d4dad5d11", "text": "Given the size and confidence of pairwise local orderings, angular embedding (AE) finds a global ordering with a near-global optimal eigensolution. As a quadratic criterion in the complex domain, AE is remarkably robust to outliers, unlike its real domain counterpart LS, the least squares embedding. Our comparative study of LS and AE reveals that AE's robustness is due not to the particular choice of the criterion, but to the choice of representation in the complex domain. When the embedding is encoded in the angular space, we not only have a nonconvex error function that delivers robustness, but also have a Hermitian graph Laplacian that completely determines the optimum and delivers efficiency. The high quality of embedding by AE in the presence of outliers can hardly be matched by LS, its corresponding L1 norm formulation, or their bounded versions. These results suggest that the key to overcoming outliers lies not with additionally imposing constraints on the embedding solution, but with adaptively penalizing inconsistency between measurements themselves. AE thus significantly advances statistical ranking methods by removing the impact of outliers directly without explicit inconsistency characterization, and advances spectral clustering methods by covering the entire size-confidence measurement space and providing an ordered cluster organization.", "title": "" } ]
scidocsrr
ab09694ee248b8430aab5e77271eddfd
Coarse-to-Fine Description for Fine-Grained Visual Categorization
[ { "docid": "892661d87138d49aab2a54b7557a7021", "text": "Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts. Methods for pose-normalized representations have been proposed, but generally presume bounding box annotations at test time due to the difficulty of object detection. We propose a model for fine-grained categorization that overcomes these limitations by leveraging deep convolutional features computed on bottom-up region proposals. Our method learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation. Experiments on the CaltechUCSD bird dataset confirm that our method outperforms state-of-the-art fine-grained categorization methods in an end-to-end evaluation without requiring a bounding box at test time.", "title": "" } ]
[ { "docid": "4d1ea9da68cc3498b413371f12c90433", "text": "Transfer Learning (TL) plays a crucial role when a given dataset has insufficient labeled examples to train an accurate model. In such scenarios, the knowledge accumulated within a model pre-trained on a source dataset can be transferred to a target dataset, resulting in the improvement of the target model. Though TL is found to be successful in the realm of imagebased applications, its impact and practical use in Natural Language Processing (NLP) applications is still a subject of research. Due to their hierarchical architecture, Deep Neural Networks (DNN) provide flexibility and customization in adjusting their parameters and depth of layers, thereby forming an apt area for exploiting the use of TL. In this paper, we report the results and conclusions obtained from extensive empirical experiments using a Convolutional Neural Network (CNN) and try to uncover thumb rules to ensure a successful positive transfer. In addition, we also highlight the flawed means that could lead to a negative transfer. We explore the transferability of various layers and describe the effect of varying hyper-parameters on the transfer performance. Also, we present a comparison of accuracy value and model size against state-of-the-art methods. Finally, we derive inferences from the empirical results and provide best practices to achieve a successful positive transfer.", "title": "" }, { "docid": "c366303728d2a8ee47fe4cbfe67dec24", "text": "Terrestrial Gamma-ray Flashes (TGFs), discovered in 1994 by the Compton Gamma-Ray Observatory, are high-energy photon bursts originating in the Earth’s atmosphere in association with thunderstorms. In this paper, we demonstrate theoretically that, while TGFs pass through the atmosphere, the large quantities of energetic electrons knocked out by collisions between photons and air molecules generate excited species of neutral and ionized molecules, leading to a significant amount of optical emissions. These emissions represent a novel type of transient luminous events in the vicinity of the cloud tops. We show that this predicted phenomenon illuminates a region with a size notably larger than the TGF source and has detectable levels of brightness. Since the spectroscopic, morphological, and temporal features of this luminous event are closely related with TGFs, corresponding measurements would provide a novel perspective for investigation of TGFs, as well as lightning discharges that produce them.", "title": "" }, { "docid": "849280927a79cee3f6580ec837d89797", "text": "BACKGROUND\nGlenohumeral pain and rotator cuff tendinopathy (RCT) are common musculoskeletal complaints with high prevalence among working populations. The primary proposed pathophysiologic mechanisms are sub-acromial RC tendon impingement and reduced tendon blood flow. Some sleep postures may increase subacromial pressure, potentially contributing to these postulated mechanisms. This study uses a large population of workers to investigate whether there is an association between preferred sleeping position and prevalence of: (1) shoulder pain, and (2) rotator cuff tendinopathy.\n\n\nMETHODS\nA cross-sectional analysis was performed on baseline data from a multicenter prospective cohort study. Participants were 761 workers who were evaluated by questionnaire using a body diagram to determine the presence of glenohumeral pain within 30 days prior to enrollment. 
The questionnaire also assessed primary and secondary preferred sleep position(s) using 6 labeled diagrams. All workers underwent a structured physical examination to determine whether RCT was present. For this study, the case definition of RCT was glenohumeral pain plus at least one of a positive supraspinatus test, painful arc and/or Neer's test. Prevalence of glenohumeral pain and RCT were individually calculated for the primary and secondary sleep postures and odds ratios were calculated.\n\n\nRESULTS\nAge, sex, Framingham cardiovascular risk score and BMI had significant associations with glenohumeral pain. For rotator cuff tendinopathy, increasing age, Framingham risk score and Hand Activity Level (HAL) showed significant associations. The sleep position anticipated to have the highest risk of glenohumeral pain and RCT was paradoxically associated with a decreased prevalence of glenohumeral pain and also trended toward being protective for RCT. Multivariable logistic regression showed no further significant associations.\n\n\nCONCLUSION\nThis cross-sectional study unexpectedly found a reduced association between one sleep posture and glenohumeral pain. This cross-sectional study may be potentially confounded, by participants who are prone to glenohumeral pain and RCT may have learned to avoid sleeping in the predisposing position. Longitudinal studies are needed to further evaluate a possible association between glenohumeral pain or RCT and sleep posture as a potential risk factor.", "title": "" }, { "docid": "907de88b781d58610b0a09313014017f", "text": "This study was conducted to determine the seroprevalence of antibodies against Newcastle disease virus (NDV), Chicken infectious anemia virus (CIAV) and Avian influenza virus (AIV) in indigenous chickens in Grenada, West Indies. Indigenous chickens are kept for eggs and meat for either domestic consumption or local sale. These birds are usually kept in the backyard of the house with little or no shelter. The mean size of the flock per household was 14 birds (range 5-40 birds). Blood was collected from 368 birds from all the six parishes of Grenada and serum samples were tested for antibodies against NDV, CIAV and AIV using commercial enzyme-linked immunosorbent assay (ELISA) kits. The seroprevalence of antibodies against NDV, CIA and AI was 66.3% (95% CI; 61.5% to 71.1%), 59.5% (95% CI; 54.4% to 64.5%) and 10.3% (95% CI; 7.2% to 13.4%), respectively. Since indigenous chickens in Grenada are not vaccinated against poultry pathogens, these results indicate exposure of chickens to NDV, AIV and CIAV Indigenous chickens are thus among the risk factors acting as vectors of pathogens that can threaten commercial poultry and other avian species in Grenada", "title": "" }, { "docid": "10a0f370ad3e9c3d652e397860114f90", "text": "Statistical data associated with geographic regions is nowadays globally available in large amounts and hence automated methods to visually display these data are in high demand. There are several well-established thematic map types for quantitative data on the ratio-scale associated with regions: choropleth maps, cartograms, and proportional symbol maps. However, all these maps suffer from limitations, especially if large data values are associated with small regions. To overcome these limitations, we propose a novel type of quantitative thematic map, the necklace map. 
In a necklace map, the regions of the underlying two-dimensional map are projected onto intervals on a one-dimensional curve (the necklace) that surrounds the map regions. Symbols are scaled such that their area corresponds to the data of their region and placed without overlap inside the corresponding interval on the necklace. Necklace maps appear clear and uncluttered and allow for comparatively large symbol sizes. They visualize data sets well which are not proportional to region sizes. The linear ordering of the symbols along the necklace facilitates an easy comparison of symbol sizes. One map can contain several nested or disjoint necklaces to visualize clustered data. The advantages of necklace maps come at a price: the association between a symbol and its region is weaker than with other types of maps. Interactivity can help to strengthen this association if necessary. We present an automated approach to generate necklace maps which allows the user to interactively control the final symbol placement. We validate our approach with experiments using various data sets and maps.", "title": "" }, { "docid": "3fdd81a3e2c86f43152f72e159735a42", "text": "Class imbalance learning tackles supervised learning problems where some classes have significantly more examples than others. Most of the existing research focused only on binary-class cases. In this paper, we study multiclass imbalance problems and propose a dynamic sampling method (DyS) for multilayer perceptrons (MLP). In DyS, for each epoch of the training process, every example is fed to the current MLP and then the probability of it being selected for training the MLP is estimated. DyS dynamically selects informative data to train the MLP. In order to evaluate DyS and understand its strength and weakness, comprehensive experimental studies have been carried out. Results on 20 multiclass imbalanced data sets show that DyS can outperform the compared methods, including pre-sample methods, active learning methods, cost-sensitive methods, and boosting-type methods.", "title": "" }, { "docid": "71c6c714535ae1bfd749cbb8bbb34f5e", "text": "This paper tackles the problem of relative pose estimation between two monocular camera images in textureless scenes. Due to a lack of point matches, point-based approaches such as the 5-point algorithm often fail when used in these scenarios. Therefore we investigate relative pose estimation from line observations. We propose a new approach in which the relative pose estimation from lines is extended by a 3D line direction estimation step. The estimated line directions serve to improve the robustness and the efficiency of all processing phases: they enable us to guide the matching of line features and allow an efficient calculation of the relative pose. First, we describe in detail the novel 3D line direction estimation from a single image by clustering of parallel lines in the world. Secondly, we propose an innovative guided matching in which only clusters of lines with corresponding 3D line directions are considered. Thirdly, we introduce the new relative pose estimation based on 3D line directions. Finally, we combine all steps to a visual odometry system. 
We evaluate the different steps on synthetic and real sequences and demonstrate that in the targeted scenarios we outperform the state-of-the-art in both accuracy and computation time.", "title": "" }, { "docid": "da5ad61c492419515e8449b435b42e80", "text": "Camera tracking is an important issue in many computer vision and robotics applications, such as, augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The proposed approach is based on tracking a set of sparse features, which are successively tracked in a stream of video frames. In the developed system, camera initially views a chessboard with known cell size for few frames to be enabled to construct initial map of the environment. Thereafter, Camera pose estimation for each new incoming frame is carried out in a framework that is merely working with a set of visible natural landmarks. Estimation of 6-DOF camera pose parameters is performed using a particle filter. Moreover, recovering depth of newly detected landmarks, a linear triangulation method is used. The proposed method is applied on real world videos and positioning error of the camera pose is less than 3 cm in average that indicates effectiveness and accuracy of the proposed method.", "title": "" }, { "docid": "b08f67bc9b84088f8298b35e50d0b9c5", "text": "This review examines different nutritional guidelines, some case studies, and provides insights and discrepancies, in the regulatory framework of Food Safety Management of some of the world's economies. There are thousands of fermented foods and beverages, although the intention was not to review them but check their traditional and cultural value, and if they are still lacking to be classed as a category on different national food guides. For understanding the inconsistencies in claims of concerning fermented foods among various regulatory systems, each legal system should be considered unique. Fermented foods and beverages have long been a part of the human diet, and with further supplementation of probiotic microbes, in some cases, they offer nutritional and health attributes worthy of recommendation of regular consumption. Despite the impact of fermented foods and beverages on gastro-intestinal wellbeing and diseases, their many health benefits or recommended consumption has not been widely translated to global inclusion in world food guidelines. In general, the approach of the legal systems is broadly consistent and their structures may be presented under different formats. African traditional fermented products are briefly mentioned enhancing some recorded adverse effects. Knowing the general benefits of traditional and supplemented fermented foods, they should be a daily item on most national food guides.", "title": "" }, { "docid": "ce3cd1edffb0754e55658daaafe18df6", "text": "Fact finders in legal trials often need to evaluate a mass of weak, contradictory and ambiguous evidence. There are two general ways to accomplish this task: by holistically forming a coherent mental representation of the case, or by atomistically assessing the probative value of each item of evidence and integrating the values according to an algorithm. Parallel constraint satisfaction (PCS) models of cognitive coherence posit that a coherent mental representation is created by discounting contradicting evidence, inflating supporting evidence and interpreting ambivalent evidence in a way coherent with the emerging decision. 
This leads to inflated support for whichever hypothesis the fact finder accepts as true. Using a Bayesian network to model the direct dependencies between the evidence, the intermediate hypotheses and the main hypothesis, parameterised with (conditional) subjective probabilities elicited from the subjects, I demonstrate experimentally how an atomistic evaluation of evidence leads to a convergence of the computed posterior degrees of belief in the guilt of the defendant of those who convict and those who acquit. The atomistic evaluation preserves the inherent uncertainty that largely disappears in a holistic evaluation. Since the fact finders’ posterior degree of belief in the guilt of the defendant is the relevant standard of proof in many legal systems, this result implies that using an atomistic evaluation of evidence, the threshold level of posterior belief in guilt required for a conviction may often not be reached.", "title": "" }, { "docid": "be3e02812e35000b39e4608afc61f229", "text": "The growing use of control access systems based on face recognition shed light over the need for even more accurate systems to detect face spoofing attacks. In this paper, an extensive analysis on face spoofing detection works published in the last decade is presented. The analyzed works are categorized by their fundamental parts, i.e., descriptors and classifiers. This structured survey also brings a comparative performance analysis of the works considering the most important public data sets in the field. The methodology followed in this work is particularly relevant to observe temporal evolution of the field, trends in the existing approaches, to discuss still opened issues, and to propose new perspectives for the future of face spoofing detection.", "title": "" }, { "docid": "28d8be0cd581a9696c533b457ceb6628", "text": "Nowadays, people usually participate in multiple social networks simultaneously, e.g., Facebook and Twitter. Formally, the correspondences of the accounts that belong to the same user are defined as anchor links, and the networks aligned by anchor links can be denoted as aligned networks. In this paper, we study the problem of anchor link prediction (ALP) across a pair of aligned networks based on social network structure. First, three similarity metrics (CPS, CCS, and CPS+) are proposed. Different from the previous works, we focus on the theoretical guarantees of our metrics. We prove mathematically that the node pair with the maximum CPS or CPS+ should be an anchor link with high probability and a correctly predicted anchor link must have a high value of CCS. Second, using the CPS+ and CCS, we present a two-stage iterative algorithm CPCC to solve the problem of the ALP. More specifically, we present an early termination strategy to make a tradeoff between precision and recall. At last, a series of experiments are conducted on both synthetic and real-world social networks to demonstrate the effectiveness of the CPCC.", "title": "" }, { "docid": "80b3337b5a0161990358bd9da0119471", "text": "In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data.
Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform stateof-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time.", "title": "" }, { "docid": "344db754658e580ea441c44987b09286", "text": "Online learning to rank for information retrieval (IR) holds promise for allowing the development of \"self-learning\" search engines that can automatically adjust to their users. With the large amount of e.g., click data that can be collected in web search settings, such techniques could enable highly scalable ranking optimization. However, feedback obtained from user interactions is noisy, and developing approaches that can learn from this feedback quickly and reliably is a major challenge.\n In this paper we investigate whether and how previously collected (historical) interaction data can be used to speed up learning in online learning to rank for IR. We devise the first two methods that can utilize historical data (1) to make feedback available during learning more reliable and (2) to preselect candidate ranking functions to be evaluated in interactions with users of the retrieval system. We evaluate both approaches on 9 learning to rank data sets and find that historical data can speed up learning, leading to substantially and significantly higher online performance. In particular, our pre-selection method proves highly effective at compensating for noise in user feedback. Our results show that historical data can be used to make online learning to rank for IR much more effective than previously possible, especially when feedback is noisy.", "title": "" }, { "docid": "e50842fc8438af7fe6ce4b6d9a5439a7", "text": "OBJECTIVE\nTimely recognition and optimal management of atherogenic dyslipidemia (AD) and residual vascular risk (RVR) in family medicine.\n\n\nBACKGROUND\nThe global increase of the incidence of obesity is accompanied by an increase in the incidence of many metabolic and lipoprotein disorders, in particular AD, as an typical feature of obesity, metabolic syndrome, insulin resistance and diabetes type 2. AD is an important factor in cardio metabolic risk, and is characterized by a lipoprotein profile with low levels of high-density lipoprotein (HDL), high levels of triglycerides (TG) and high levels of low-density lipoprotein (LDL) cholesterol. Standard cardiometabolic risk assessment using the Framingham risk score and standard treatment with statins is usually sufficient, but not always that effective, because it does not reduce RVR that is attributed to elevated TG and reduced HDL cholesterol. RVR is subject to reduction through lifestyle changes or by pharmacological interventions. 
In some studies it was concluded that dietary interventions should aim to reduce the intake of calories, simple carbohydrates and saturated fats, with the goal of reaching cardiometabolic suitability, rather than weight reduction. Other studies have found that the reduction of carbohydrates in the diet or weight loss can alleviate AD changes, while changes in intake of total or saturated fat had no significant influence. In our presented case, a lifestyle change was advised as a suitable diet with reduced intake of carbohydrates and a moderate physical activity of walking for at least 180 minutes per week, with an recommendation for daily intake of calories alignment with the total daily (24-hour) energy expenditure (24-EE), depending on the degree of physical activity, type of food and the current health condition. Such lifestyle changes together with combined medical therapy with Statins, Fibrates and Omega-3 fatty acids, resulted in significant improvement in atherogenic lipid parameters.\n\n\nCONCLUSION\nUnsuitable atherogenic nutrition and insufficient physical activity are the new risk factors characteristic for AD. Nutritional interventions such as diet with reduced intake of carbohydrates and calories, moderate physical activity, combined with pharmacotherapy can improve atherogenic dyslipidemic profile and lead to loss of weight. Although one gram of fat release twice more kilo calories compared to carbohydrates, carbohydrates seems to have a greater atherogenic potential, which should be explored in future.", "title": "" }, { "docid": "bf338661988fd28c9bafe7ea1ca59f34", "text": "We propose a system for landing unmanned aerial vehicles (UAV), specifically an autonomous rotorcraft, in uncontrolled, arbitrary, terrains. We present plans for and progress on a vision-based system for the recovery of the geometry and material properties of local terrain from a mounted stereo rig for the purposes of finding an optimal landing site. A system is developed which integrates motion estimation from tracked features, and an algorithm for approximate estimation of a dense elevation map in a world coordinate system.", "title": "" }, { "docid": "00fa68c8e80e565c6fc4e0fdf053bac8", "text": "This work partially reports the results of a study aiming at the design and analysis of the performance of a multi-cab metropolitan transportation system. In our model we investigate a particular multi-vehicle many-to-many dynamic request dial-a-ride problem. We present a heuristic algorithm for this problem and some preliminary results. The algorithm is based on iteratively solving a singlevehicle subproblem at optimality: a pretty efficient dynamic programming routine has been devised for this purpose. This work has been carried out by researchers from both University of Rome “Tor Vergata” and Italian Energy Research Center ENEA as a line of a reasearch program, regarding urban mobility optimization, funded by ENEA and the Italian Ministry of Environment.", "title": "" }, { "docid": "7431ee071307189e58b5c7a9ce3a2189", "text": "Among tangible threats and vulnerabilities facing current biometric systems are spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access and advantages. Recently, an increasing attention has been given to this research problem. This can be attested by the growing number of articles and the various competitions that appear in major biometric forums. 
We have recently participated in a large consortium (TABULARASA) dealing with the vulnerabilities of existing biometric systems to spoofing attacks with the aim of assessing the impact of spoofing attacks, proposing new countermeasures, setting standards/protocols, and recording databases for the analysis of spoofing attacks to a wide range of biometrics including face, voice, gait, fingerprints, retina, iris, vein, electro-physiological signals (EEG and ECG). The goal of this position paper is to share the lessons learned about spoofing and anti-spoofing in face biometrics, and to highlight open issues and future directions.", "title": "" }, { "docid": "e19ca2e4f2dbf4bd808f2f7a1a4aba18", "text": "BACKGROUND\nCurrent ventricular assist devices (VADs) in the United States are designed primarily for adult use. Data on VADs as a bridge to transplantation in children are limited.\n\n\nMETHODS AND RESULTS\nA multi-institutional, prospectively maintained database of outcomes in children after listing for heart transplantation (n=2375) was used to analyze outcomes of VAD patients (n=99, 4%) listed between January 1993 and December 2003. Median age at VAD implantation was 13.3 years (range, 2 days to 17.9 years); diagnoses were cardiomyopathy (78%) and congenital heart disease (22%). Mean duration of support was 57 days (range, 1 to 465 days). Seventy-three percent were supported with a long-term device, with 39% requiring biventricular support. Seventy-seven patients (77%) survived to transplantation, 5 patients were successfully weaned from support and recovered, and 17 patients (17%) died on support. In the recent era (2000 to 2003), successful bridge to transplantation with VAD was achieved in 86% of patients. Peak hazard for death while waiting was the first 2 weeks after VAD placement. Risk factors for death while awaiting a transplant included earlier era of implantation (P=0.05), female gender (P=0.02), and congenital disease diagnosis (P=0.05). There was no difference in 5-year survival after transplantation for patients on VAD at time of transplantation as compared with those not requiring VAD.\n\n\nCONCLUSIONS\nVAD support in children successfully bridged 77% of patients to transplantation, with posttransplantation outcomes comparable to those not requiring VAD. These encouraging results emphasize the need to further understand patient selection and to delineate the impact of VAD technology for children.", "title": "" } ]
scidocsrr
b384cee62a8454cf87dd629f010d7dc5
Deep Active Learning for Civil Infrastructure Defect Detection and Classification
[ { "docid": "44ffac24ef4d30a8104a2603bb1cdcb1", "text": "Most object detectors contain two important components: a feature extractor and an object classifier. The feature extractor has rapidly evolved with significant research efforts leading to better deep convolutional architectures. The object classifier, however, has not received much attention and many recent systems (like SPPnet and Fast/Faster R-CNN) use simple multi-layer perceptrons. This paper demonstrates that carefully designing deep networks for object classification is just as important. We experiment with region-wise classifier networks that use shared, region-independent convolutional features. We call them “Networks on Convolutional feature maps” (NoCs). We discover that aside from deep feature maps, a deep and convolutional per-region classifier is of particular importance for object detection, whereas latest superior image classification models (such as ResNets and GoogLeNets) do not directly lead to good detection accuracy without using such a per-region classifier. We show by experiments that despite the effective ResNets and Faster R-CNN systems, the design of NoCs is an essential element for the 1st-place winning entries in ImageNet and MS COCO challenges 2015.", "title": "" } ]
[ { "docid": "43919b011f7d65d82d03bb01a5e85435", "text": "Self-inflicted burns are regularly admitted to burns units worldwide. Most of these patients are referred to psychiatric services and are successfully treated however some return to hospital with recurrent self-inflicted burns. The aim of this study is to explore the characteristics of the recurrent self-inflicted burn patients admitted to the Royal North Shore Hospital during 2004-2011. Burn patients were drawn from a computerized database and recurrent self-inflicted burn patients were identified. Of the total of 1442 burn patients, 40 (2.8%) were identified as self-inflicted burns. Of these patients, 5 (0.4%) were identified to have sustained previous self-inflicted burns and were interviewed by a psychiatrist. Each patient had been diagnosed with a borderline personality disorder and had suffered other forms of deliberate self-harm. Self-inflicted burns were utilized to relieve or help regulate psychological distress, rather than to commit suicide. Most patients had a history of emotional neglect, physical and/or sexual abuse during their early life experience. Following discharge from hospital, the patients described varying levels of psychiatric follow-up, from a post-discharge review at a local community mental health centre to twice-weekly psychotherapy. The patients who engaged in regular psychotherapy described feeling more in control of their emotions and reported having a longer period of abstinence from self-inflicted burn. Although these patients represent a small proportion of all burns, the repeat nature of their injuries led to a significant use of clinical resources. A coordinated and consistent treatment pathway involving surgical and psychiatric services for recurrent self-inflicted burns may assist in the management of these challenging patients.", "title": "" }, { "docid": "186ba2180a44b8a4a52ffba6f46751c4", "text": "Affective characteristics are crucial factors that influence human behavior, and often, the prevalence of either emotions or reason varies on each individual. We aim to facilitate the development of agents’ reasoning considering their affective characteristics. We first identify core processes in an affective BDI agent, and we integrate them into an affective agent architecture (GenIA3). These tasks include the extension of the BDI agent reasoning cycle to be compliant with the architecture, the extension of the agent language (Jason) to support affect-based reasoning, and the adjustment of the equilibrium between the agent’s affective and rational sides.", "title": "" }, { "docid": "a1530b82b61fc6fc8eceb083fc394e9b", "text": "The performance of any algorithm will largely depend on the setting of its algorithm-dependent parameters. The optimal setting should allow the algorithm to achieve the best performance for solving a range of optimization problems. However, such parameter tuning itself is a tough optimization problem. In this paper, we present a framework for self-tuning algorithms so that an algorithm to be tuned can be used to tune the algorithm itself. Using the firefly algorithm as an example, we show that this framework works well. It is also found that different parameters may have different sensitivities and thus require different degrees of tuning. Parameters with high sensitivities require fine-tuning to achieve optimality.", "title": "" }, { "docid": "2e9d0bf42b8bb6eb8752e89eb46f2fc5", "text": "What is the growth pattern of social networks, like Facebook and WeChat? 
Does it truly exhibit exponential early growth, as predicted by textbook models like the Bass model, SI, or the Branching Process? How about the count of links, over time, for which there are few published models?\n We examine the growth of several real networks, including one of the world's largest online social network, ``WeChat'', with 300 million nodes and 4.75 billion links by 2013; and we observe power law growth for both nodes and links, a fact that completely breaks the sigmoid models (like SI, and Bass). In its place, we propose NETTIDE, along with differential equations for the growth of the count of nodes, as well as links. Our model accurately fits the growth patterns of real graphs; it is general, encompassing as special cases all the known, traditional models (including Bass, SI, log-logistic growth); while still remaining parsimonious, requiring only a handful of parameters. Moreover, our NETTIDE for link growth is the first one of its kind, accurately fitting real data, and naturally leading to the densification phenomenon. We validate our model with four real, time-evolving social networks, where NETTIDE gives good fitting accuracy, and, more importantly, applied on the WeChat data, our NETTIDE forecasted more than 730 days into the future, with 3% error.", "title": "" }, { "docid": "7fcd8eee5f2dccffd3431114e2b0ed5a", "text": "Crowdsourcing is becoming more and more important for commercial purposes. With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge work force and a large knowledge base can be easily accessed and utilized. But due to the anonymity of the workers, they are encouraged to cheat the employers in order to maximize their income. Thus, this paper we analyze two widely used crowd-based approaches to validate the submitted work. Both approaches are evaluated with regard to their detection quality, their costs and their applicability to different types of typical crowdsourcing tasks.", "title": "" }, { "docid": "7499f88de9d2f76008dc38e96b08ca0a", "text": "Refractory and super-refractory status epilepticus (SE) are serious illnesses with a high risk of morbidity and even fatality. In the setting of refractory generalized convulsive SE (GCSE), there is ample justification to use continuous infusions of highly sedating medications—usually midazolam, pentobarbital, or propofol. Each of these medications has advantages and disadvantages, and the particulars of their use remain controversial. Continuous EEG monitoring is crucial in guiding the management of these critically ill patients: in diagnosis, in detecting relapse, and in adjusting medications. Forms of SE other than GCSE (and its continuation in a “subtle” or nonconvulsive form) should usually be treated far less aggressively, often with nonsedating anti-seizure drugs (ASDs). Management of “non-classic” NCSE in ICUs is very complicated and controversial, and some cases may require aggressive treatment. One of the largest problems in refractory SE (RSE) treatment is withdrawing coma-inducing drugs, as the prolonged ICU courses they prompt often lead to additional complications. In drug withdrawal after control of convulsive SE, nonsedating ASDs can assist; medical management is crucial; and some brief seizures may have to be tolerated. For the most refractory of cases, immunotherapy, ketamine, ketogenic diet, and focal surgery are among several newer or less standard treatments that can be considered. 
The morbidity and mortality of RSE is substantial, but many patients survive and even return to normal function, so RSE should be treated promptly and as aggressively as the individual patient and type of SE indicate.", "title": "" }, { "docid": "257c9fda9808cb173e3b22f927864c21", "text": "Salesforce.com has recently completed an agile transformation of a two hundred person team within a three month window. This is one of the largest and fastest \"big-bang\" agile rollouts. This experience report discusses why we chose to move to an agile process, how we accomplished the transformation and what we learned from applying agile at scale.", "title": "" }, { "docid": "3bdd30d2c6e63f2e5540757f1db878b6", "text": "The spreading of unsubstantiated rumors on online social networks (OSN) either unintentionally or intentionally (e.g., for political reasons or even trolling) can have serious consequences such as in the recent case of rumors about Ebola causing disruption to health-care workers. Here we show that indicators aimed at quantifying information consumption patterns might provide important insights about the virality of false claims. In particular, we address the driving forces behind the popularity of contents by analyzing a sample of 1.2M Facebook Italian users consuming different (and opposite) types of information (science and conspiracy news). We show that users’ engagement across different contents correlates with the number of friends having similar consumption patterns (homophily), indicating the area in the social network where certain types of contents are more likely to spread. Then, we test diffusion patterns on an external sample of 4,709 intentional satirical false claims showing that neither the presence of hubs (structural properties) nor the most active users (influencers) are prevalent in viral phenomena. Instead, we found out that in an environment where misinformation is pervasive, users’ aggregation around shared beliefs may make the usual exposure to conspiracy stories (polarization) a determinant for the virality of false information.", "title": "" }, { "docid": "78cf38ee62d5501c3119552cb70b0997", "text": "This document discusses the status of research on detection and prevention of financial fraud undertaken as part of the IST European Commission funded FF POIROT (Financial Fraud Prevention Oriented Information Resources Using Ontology Technology) project. A first task has been the specification of the user requirements that define the functionality of the financial fraud ontology to be designed by the FF POIROT partners. It is claimed here that modeling fraudulent activity involves a mixture of law and facts as well as inferences about facts present, facts presumed or facts missing. The purpose of this paper is to explain this abstract model and to specify the set of user requirements.", "title": "" }, { "docid": "512ecda05fae6cb333c89833c489dbff", "text": "This review examines protein complexes in the Brookhaven Protein Databank to gain a better understanding of the principles governing the interactions involved in protein-protein recognition. The factors that influence the formation of protein-protein complexes are explored in four different types of protein-protein complexes--homodimeric proteins, heterodimeric proteins, enzyme-inhibitor complexes, and antibody-protein complexes.
The comparison between the complexes highlights differences that reflect their biological roles.", "title": "" }, { "docid": "0fd147227c10a243f4209ffc1295d279", "text": "Increases in server power dissipation time placed significant pressure on traditional data center thermal management systems. Traditional systems utilize computer room air conditioning (CRAC) units to pressurize a raised floor plenum with cool air that is passed to equipment racks via ventilation tiles distributed throughout the raised floor. Temperature is typically controlled at the hot air return of the CRAC units away from the equipment racks. Due primarily to a lack of distributed environmental sensing, these CRAC systems are often operated conservatively resulting in reduced computational density and added operational expense. This paper introduces a data center environmental control system that utilizes a distributed sensor network to manipulate conventional CRAC units within an air-cooled environment. The sensor network is attached to standard racks and provides a direct measurement of the environment in close proximity to the computational resources. A calibration routine is used to characterize the response of each sensor in the network to individual CRAC actuators. A cascaded control algorithm is used to evaluate the data from the sensor network and manipulate supply air temperature and flow rate from individual CRACs to ensure thermal management while reducing operational expense. The combined controller and sensor network has been deployed in a production data center environment. Results from the algorithm will be presented that demonstrate the performance of the system and evaluate the energy savings compared with conventional data center environmental control architecture", "title": "" }, { "docid": "08fdb69b893ee37285a98fc447b9748e", "text": "We introduce a novel robust hybrid 3D face tracking framework from RGBD video streams, which is capable of tracking head pose and facial actions without pre-calibration or intervention from a user. In particular, we emphasize on improving the tracking performance in instances where the tracked subject is at a large distance from the cameras, and the quality of point cloud deteriorates severely. This is accomplished by the combination of a flexible 3D shape regressor and the joint 2D+3D optimization on shape parameters. Our approach fits facial blendshapes to the point cloud of the human head, while being driven by an efficient and rapid 3D shape regressor trained on generic RGB datasets. As an on-line tracking system, the identity of the unknown user is adapted on-the-fly resulting in improved 3D model reconstruction and consequently better tracking performance. The result is a robust RGBD face tracker capable of handling a wide range of target scene depths, whose performances are demonstrated in our extensive experiments better than those of the state-of-the-arts.", "title": "" }, { "docid": "9cb832657be4d4d80682c1a49249a319", "text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.08.023 ⇑ Corresponding author. Tel.: +47 73593602; fax: + E-mail address: Marielle.Christiansen@iot.ntnu.no This paper considers a maritime inventory routing problem faced by a major cement producer. A heterogeneous fleet of bulk ships transport multiple non-mixable cement products from producing factories to regional silo stations along the coast of Norway. 
Inventory constraints are present both at the factories and the silos, and there are upper and lower limits for all inventories. The ship fleet capacity is limited, and in peak periods the demand for cement products at the silos exceeds the fleet capacity. In addition, constraints regarding the capacity of the ships’ cargo holds, the depth of the ports and the fact that different cement products cannot be mixed must be taken into consideration. A construction heuristic embedded in a genetic algorithmic framework is developed. The approach adopted is used to solve real instances of the problem within reasonable solution time and with good quality solutions. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ba4d30e7ea09d84f8f7d96c426e50f34", "text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.", "title": "" }, { "docid": "bee25514d15321f4f0bdcf867bb07235", "text": "We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full FrankWolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate FrankWolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.", "title": "" }, { "docid": "046207a87b7b01f6bc12f08a195670b9", "text": "Text normalization is the task of transforming lexical variants to their canonical forms. We model the problem of text normalization as a character-level sequence to sequence learning problem and present a neural encoder-decoder model for solving it. To train the encoder-decoder model, many sentences pairs are generally required. However, Japanese non-standard canonical pairs are scarce in the form of parallel corpora. 
To address this issue, we propose a method of data augmentation to increase data size by converting existing resources into synthesized non-standard forms using handcrafted rules. We conducted an experiment to demonstrate that the synthesized corpus contributes to stably train an encoder-decoder model and improve the performance of Japanese text normalization.", "title": "" }, { "docid": "54bf44e04920bdaa7388dbbbbd34a1a8", "text": "TIDs have been detected using various measurement techniques, including HF sounders, incoherent scatter radars, in-situ measurements, and optical techniques. However, there is still much we do not yet know or understand about TIDs. Observations of TIDs have tended to be sparse, and there is a need for additional observations to provide new scientific insight into the geophysical source phenomenology and wave propagation physics. The dense network of GPS receivers around the globe offers a relatively new data source to observe and monitor TIDs. In this paper, we use Total Electron Content (TEC) measurements from 4000 GPS receivers throughout the continental United States to observe TIDs associated with the 11 March 2011 Tohoku tsunami. The tsunami propagated across the Pacific to the US west coast over several hours, and corresponding TIDs were observed over Hawaii, and via the network of GPS receivers in the US. The network of GPS receivers in effect provides a 2D spatial map of TEC perturbations, which can be used to calculate TID parameters, including horizontal wavelength, speed, and period. Well-formed, planar traveling ionospheric disturbances were detected over the west coast of the US ten hours after the earthquake. Fast Fourier transform analysis of the observed waveforms revealed that the period of the wave was 15.1 minutes with a horizontal wavelength of 194.8 km, phase velocity of 233.0 m/s, and an azimuth of 105.2 (propagating nearly due east in the direction of the tsunami wave). These results are consistent with TID observations in airglow measurements from Hawaii earlier in the day, and with other GPS TEC observations. The vertical wavelength of the TID was found to be 43.5 km. The TIDs moved at the same velocity as the tsunami itself. Much work is still needed in order to fully understand the ocean-atmosphere coupling mechanisms, which could lead to the development of effective tsunami detection/warning systems. The work presented in this paper demonstrates a technique for the study of ionospheric perturbations that can affect navigation, communications and surveillance systems.", "title": "" }, { "docid": "9ba6656cb67dcb72d4ebadcaf9450f40", "text": "OBJECTIVE\nThe Japan Ankylosing Spondylitis Society conducted a nationwide questionnaire survey of spondyloarthropathies (SpA) in 1990 and 1997, (1) to estimate the prevalence and incidence, and (2) to validate the criteria of Amor and the European Spondylarthropathy Study Group (ESSG) in Japan.\n\n\nMETHODS\nJapan was divided into 9 districts, to each of which a survey supervisor was assigned. According to unified criteria, each supervisor selected all the clinics and hospitals with potential for SpA patients in the district. The study population consisted of all patients with SpA seen at these institutes during a 5 year period (1985-89) for the 1st survey and a 7 year period (1990-96) for the 2nd survey.\n\n\nRESULTS\nThe 1st survey recruited 426 and the 2nd survey 638 cases, 74 of which were registered in both studies. 
The total number of patients with SpA identified 1985-96 was 990 (760 men, 227 women). They consisted of patients with ankylosing spondylitis (68.3%), psoriatic arthritis (12.7%), reactive arthritis (4.0%), undifferentiated SpA (5.4%), inflammatory bowel disease (2.2%), pustulosis palmaris et plantaris (4.7%), and others (polyenthesitis, etc.) (0.8%). The maximum onset number per year was 49. With the assumption that at least one-tenth of the Japanese population with SpA was recruited, incidence and prevalence were estimated not to exceed 0.48/100,000 and 9.5/100,000 person-years, respectively. The sensitivity was 84.0% for Amor criteria and 84.6 for ESSG criteria.\n\n\nCONCLUSION\nThe incidence and prevalence of SpA in Japanese were estimated to be less than 1/10 and 1/200, respectively, of those among Caucasians. The adaptability of the Amor and ESSG criteria was validated for the Japanese population.", "title": "" }, { "docid": "516ef94fad7f7e5801bf1ef637ffb136", "text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.1", "title": "" }, { "docid": "00e315b8baf0ce6548ec7139c8ce105c", "text": "We revisit the well-known problem of boolean group testing which attempts to discover a sparse subset of faulty items in a large set of mostly good items using a small number of pooled (or grouped) tests. This problem originated during the second WorldWar, and has been the subject of active research during the 70's, and 80's. Recently, there has been a resurgence of interest due to the striking parallels between group testing and the now highly popular field of compressed sensing. In fact, boolean group testing is nothing but compressed sensing in a different algebra - with boolean `AND' and `OR' operations replacing vector space multiplication and addition. In this paper we review existing solutions for non-adaptive (batch) group testing and propose a linear programming relaxation solution, which has a resemblance to the basis pursuit algorithm for sparse recovery in linear models. We compare its performance to alternative methods for group testing.", "title": "" } ]
scidocsrr
c89b73da9165dc72761c751635e2c6ae
Defending Web Servers with Feints, Distraction and Obfuscation
[ { "docid": "90bb9a4740e9fa028932b68a34717b43", "text": "Recently, the increase of interconnectivity has led to a rising amount of IoT enabled devices in botnets. Such botnets are currently used for large scale DDoS attacks. To keep track with these malicious activities, Honeypots have proven to be a vital tool. We developed and set up a distributed and highly-scalable WAN Honeypot with an attached backend infrastructure for sophisticated processing of the gathered data. For the processed data to be understandable we designed a graphical frontend that displays all relevant information that has been obtained from the data. We group attacks originating in a short period of time in one source as sessions. This enriches the data and enables a more in-depth analysis. We produced common statistics like usernames, passwords, username/password combinations, password lengths, originating country and more. From the information gathered, we were able to identify common dictionaries used for brute-force login attacks and other more sophisticated statistics like login attempts per session and attack efficiency.", "title": "" }, { "docid": "fb7807c7f28d0e768b6a8570d89b3b02", "text": "This paper presents a summary of research findings for a new reacitve phishing investigative technique using Web bugs and honeytokens. Phishing has become a rampant problem in today 's society and has cost financial institutions millions of dollars per year. Today's reactive techniques against phishing usually involve methods that simply minimize the damage rather than attempting to actually track down a phisher. Our research objective is to track down a phisher to the IP address of the phisher's workstation rather than innocent machines used as intermediaries. By using Web bugs and honeytokens on the fake Web site forms the phisher presents, one can log accesses to the honeytokens by the phisher when the attacker views the results of the forms. Research results to date are presented in this paper", "title": "" }, { "docid": "4c165c15a3c6f069f702a54d0dab093c", "text": "We propose a simple method for improving the security of hashed passwords: the maintenance of additional ``honeywords'' (false passwords) associated with each user's account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the ``honeychecker'') can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted.", "title": "" } ]
[ { "docid": "5efebde0526dbb7015ecef066b76d1a9", "text": "Recent advances in mixed-reality technologies have renewed interest in alternative modes of communication for human-robot interaction. However, most of the work in this direction has been confined to tasks such as teleoperation, simulation or explication of individual actions of a robot. In this paper, we will discuss how the capability to project intentions affect the task planning capabilities of a robot. Specifically, we will start with a discussion on how projection actions can be used to reveal information regarding the future intentions of the robot at the time of task execution. We will then pose a new planning paradigm - projection-aware planning - whereby a robot can trade off its plan cost with its ability to reveal its intentions using its projection actions. We will demonstrate each of these scenarios with the help of a joint human-robot activity using the HoloLens.", "title": "" }, { "docid": "5da030b3e27cae63acd86c7fb9c4153d", "text": "This work deals with the design and implementation prototype of a real time maximum power point tracker (MPPT) for photovoltaic panel (PV), aiming to improve energy transfer efficiency. This paper describes also the charging process of lead- acid batteries integrating the MPPT algorithm making an charging autonomous system that can be used to feed any autonomous application. The photovoltaic system exhibits a non-linear i-v characteristic and its maximum power point varies with solar insolation and temperature. To control the maximum transfer power from a PV panel the Perturbation and Observation (P&O) MPPT algorithm is executed by a simple microcontroller ATMEL ATTINY861V using the PV voltage and current information and controlling the duty cycle of a pulse width modulation (PWM) signal applied in to a DC/DC converter. The schematic and design of the single-ended primary inductance converter (SEPIC) is presented. This DC/DC converter is chosen because the input voltage can be higher or lower than the output voltage witch presents obvious design advantages. With the P&O MPPT algorithm implemented and executed by the microcontroller, the different charging stages of a lead-acid battery are described, showed and executed Finally, experimental results of the performance of the designed P&O MPPT algorithm are presented and compared with the results achieved with the direct connection of the PV panel to the battery.", "title": "" }, { "docid": "0dd0f44e59c1ee1e04d1e675dfd0fd9c", "text": "An important first step to successful global marketing is to understand the similarities and dissimilarities of values between cultures. This task is particularly daunting for companies trying to do business with China because of the scarcity of research-based information. This study uses updated values of Hofstede’s (1980) cultural model to compare the effectiveness of Pollay’s advertising appeals between the U.S. and China. Nine of the twenty hypotheses predicting effective appeals based on cultural dimensions were supported. An additional hypothesis was significant, but in the opposite direction as predicted. These findings suggest that it would be unwise to use Hofstede’s cultural dimensions as a sole predictor for effective advertising appeals. The Hofstede dimensions may lack the currency and fine grain necessary to effectively predict the success of the various advertising appeals. 
Further, the effectiveness of advertising appeals may be moderated by other factors, such as age, societal trends, political-legal environment and product usage.", "title": "" }, { "docid": "273bb44ed02076008d5d2835baed9494", "text": "Modeling informal inference in natural language is very challenging. With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), which have achieved state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform NLI from the data? If not, how can NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we aim to answer these questions by enriching the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models with external knowledge further improve the state of the art on the Stanford Natural Language Inference (SNLI) dataset.", "title": "" }, { "docid": "79cffed53f36d87b89577e96a2b2e713", "text": "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.", "title": "" }, { "docid": "bf0d5ee15b213c47d9d4a6a95d19e14a", "text": "We propose a new objective for network research: to build a fundamentally different sort of network that can assemble itself given high level instructions, reassemble itself as requirements change, automatically discover when something goes wrong, and automatically fix a detected problem or explain why it cannot do so.We further argue that to achieve this goal, it is not sufficient to improve incrementally on the techniques and algorithms we know today. Instead, we propose a new construct, the Knowledge Plane, a pervasive system within the network that builds and maintains high-level models of what the network is supposed to do, in order to provide services and advice to other elements of the network. The knowledge plane is novel in its reliance on the tools of AI and cognitive systems. 
We argue that cognitive techniques, rather than traditional algorithmic approaches, are best suited to meeting the uncertainties and complexity of our objective.", "title": "" }, { "docid": "79fdfee8b42fe72a64df76e64e9358bc", "text": "An algorithm is described to solve multiple-phase optimal control problems using a recently developed numerical method called the Gauss pseudospectral method. The algorithm is well suited for use in modern vectorized programming languages such as FORTRAN 95 and MATLAB. The algorithm discretizes the cost functional and the differential-algebraic equations in each phase of the optimal control problem. The phases are then connected using linkage conditions on the state and time. A large-scale nonlinear programming problem (NLP) arises from the discretization and the significant features of the NLP are described in detail. A particular reusable MATLAB implementation of the algorithm, called GPOPS, is applied to three classical optimal control problems to demonstrate its utility. The algorithm described in this article will provide researchers and engineers a useful software tool and a reference when it is desired to implement the Gauss pseudospectral method in other programming languages.", "title": "" }, { "docid": "bb98b9a825a4c7d0f3d4b06fafb8ff37", "text": "The tremendous evolution of programmable graphics hardware has made high-quality real-time volume graphics a reality. In addition to the traditional application of rendering volume data in scientific visualization, the interest in applying these techniques for real-time rendering of atmospheric phenomena and participating media such as fire, smoke, and clouds is growing rapidly. This course covers both applications in scientific visualization, e.g., medical volume data, and real-time rendering, such as advanced effects and illumination in computer games, in detail. Course participants will learn techniques for harnessing the power of consumer graphics hardware and high-level shading languages for real-time rendering of volumetric data and effects. Beginning with basic texture-based approaches including hardware ray casting, the algorithms are improved and expanded incrementally, covering local and global illumination, scattering, pre-integration, implicit surfaces and non-polygonal isosurfaces, transfer function design, volume animation and deformation, dealing with large volumes, high-quality volume clipping, rendering segmented volumes, higher-order filtering, and non-photorealistic volume rendering. Course participants are provided with documented source code covering details usually omitted in publications.", "title": "" }, { "docid": "da61794b9ffa1f6f4bc39cef9655bf77", "text": "This manuscript analyzes the effects of design parameters, such as aspect ratio, doping concentration and bias, on the performance of a general CMOS Hall sensor, with insight on current-related sensitivity, power consumption, and bandwidth. The article focuses on rectangular-shaped Hall probes since this is the most general geometry leading to shape-independent results. The devices are analyzed by means of 3D-TCAD simulations embedding galvanomagnetic transport model, which takes into account the Lorentz force acting on carriers due to a magnetic field. 
Simulation results define a set of trade-offs and design rules that can be used by electronic designers to conceive their own Hall probes.", "title": "" }, { "docid": "34bbc3054be98f2cc0edc25a00fe835d", "text": "The increasing prevalence of co-processors such as the Intel Xeon Phi, has been reshaping the high performance computing (HPC) landscape. The Xeon Phi comes with a large number of power efficient CPU cores, but at the same time, it's a highly memory constraint environment leaving the task of memory management entirely up to application developers. To reduce programming complexity, we are focusing on application transparent, operating system (OS) level hierarchical memory management.\n In particular, we first show that state of the art page replacement policies, such as approximations of the least recently used (LRU) policy, are not good candidates for massive many-cores due to their inherent cost of remote translation lookaside buffer (TLB) invalidations, which are inevitable for collecting page usage statistics. The price of concurrent remote TLB invalidations grows rapidly with the number of CPU cores in many-core systems and outpace the benefits of the page replacement algorithm itself. Building upon our previous proposal, per-core Partially Separated Page Tables (PSPT), in this paper we propose Core-Map Count based Priority (CMCP) page replacement policy, which exploits the auxiliary knowledge of the number of mapping CPU cores of each page and prioritizes them accordingly. In turn, it can avoid TLB invalidations for page usage statistic purposes altogether. Additionally, we describe and provide an implementation of the experimental 64kB page support of the Intel Xeon Phi and reveal some intriguing insights regarding its performance. We evaluate our proposal on various applications and find that CMCP can outperform state of the art page replacement policies by up to 38%. We also show that the choice of appropriate page size depends primarily on the degree of memory constraint in the system.", "title": "" }, { "docid": "a448b5e4e4bd017049226f06ce32fa9d", "text": "We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator’s action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphoto- realistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 com- pared to the most accurate prior approximation scheme, while being the fastest. 
We show that our models general- ize across datasets and across resolutions, and investigate a number of extensions of the presented approach.", "title": "" }, { "docid": "5fcda05ef200cd326ecb9c2412cf50b3", "text": "OBJECTIVE\nPalpable lymph nodes are common due to the reactive hyperplasia of lymphatic tissue mainly connected with local inflammatory process. Differential diagnosis of persistent nodular change on the neck is different in children, due to higher incidence of congenital abnormalities and infectious diseases and relative rarity of malignancies in that age group. The aim of our study was to analyse the most common causes of childhood cervical lymphadenopathy and determine of management guidelines on the basis of clinical examination and ultrasonographic evaluation.\n\n\nMATERIAL AND METHODS\nThe research covered 87 children with cervical lymphadenopathy. Age, gender and accompanying diseases of the patients were assessed. All the patients were diagnosed radiologically on the basis of ultrasonographic evaluation.\n\n\nRESULTS\nReactive inflammatory changes of bacterial origin were observed in 50 children (57.5%). Fever was the most common general symptom accompanying lymphadenopathy and was observed in 21 cases (24.1%). The ultrasonographic evaluation revealed oval-shaped lymph nodes with the domination of long axis in 78 patients (89.66%). The proper width of hilus and their proper vascularization were observed in 75 children (86.2%). Some additional clinical and laboratory tests were needed in the patients with abnormal sonographic image.\n\n\nCONCLUSIONS\nUltrasonographic imaging is extremely helpful in diagnostics, differentiation and following the treatment of childhood lymphadenopathy. Failure of regression after 4-6 weeks might be an indication for a diagnostic biopsy.", "title": "" }, { "docid": "8c9c9ad5e3d19b56a096e519cc6e3053", "text": "Cebocephaly and sirenomelia are uncommon birth defects. Their association is extremely rare; however, the presence of spina bifida with both conditions is not unexpected. We report on a female still-birth with cebocephaly, alobar holoprosencephaly, cleft palate, lumbar spina bifida, sirenomelia, a single umbilical artery, and a 46,XX karyotype, but without maternal diabetes mellitus. Our case adds to the examples of overlapping cephalic and caudal defects, possibly related to vulnerability of the midline developmental field or axial mesodermal dysplasia spectrum.", "title": "" }, { "docid": "f945b645e492e2b5c6c2d2d4ea6c57ae", "text": "PURPOSE\nThe aim of this review was to look at relevant data and research on the evolution of ventral hernia repair.\n\n\nMETHODS\nResources including books, research, guidelines, and online articles were reviewed to provide a concise history of and data on the evolution of ventral hernia repair.\n\n\nRESULTS\nThe evolution of ventral hernia repair has a very long history, from the recognition of ventral hernias to its current management, with significant contributions from different authors. Advances in surgery have led to more cases of ventral hernia formation, and this has required the development of new techniques and new materials for ventral hernia management. The biocompatibility of prosthetic materials has been important in mesh development. The functional anatomy and physiology of the abdominal wall has become important in ventral hernia management. 
New techniques in abdominal wall closure may prevent or reduce the incidence of ventral hernia in the future.\n\n\nCONCLUSION\nThe management of ventral hernia is continuously evolving as it responds to new demands and new technology in surgery.", "title": "" }, { "docid": "68689ad05be3bf004120141f0534fd2b", "text": "A group of 156 first year medical students completed measures of emotional intelligence (EI) and physician empathy, and a scale assessing their feelings about a communications skills course component. Females scored significantly higher than males on EI. Exam performance in the autumn term on a course component (Health and Society) covering general issues in medicine was positively and significantly related to EI score but there was no association between EI and exam performance later in the year. High EI students reported more positive feelings about the communication skills exercise. Females scored higher than males on the Health and Society component in autumn, spring and summer exams. Structural equation modelling showed direct effects of gender and EI on autumn term exam performance, but no direct effects other than previous exam performance on spring and summer term performance. EI also partially mediated the effect of gender on autumn term exam performance. These findings provide limited evidence for a link between EI and academic performance for this student group. More extensive work on associations between EI, academic success and adjustment throughout medical training would clearly be of interest. 2005 Elsevier Ltd. All rights reserved. 0191-8869/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.paid.2005.04.014 q Ethical approval from the College of Medicine and Veterinary Medicine was sought and received for this investigation. Student information was gathered and used in accordance with the Data Protection Act. * Corresponding author. Tel.: +44 131 65", "title": "" }, { "docid": "848d1bcf05598dbd654ca9835a076ee9", "text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.11.018 ⇑ Corresponding authors at: Salford Business School M5 4WT, UK. Tel.: +44 0161 2954124; fax: +44 0161 2 010 62794461; fax: +86 010 62786911 (D.-H. Zhou). E-mail addresses: W.Wang@salford.ac.uk (W. Wan (D.-H. Zhou). Remaining useful life (RUL) is the useful life left on an asset at a particular time of operation. Its estimation is central to condition based maintenance and prognostics and health management. RUL is typically random and unknown, and as such it must be estimated from available sources of information such as the information obtained in condition and health monitoring. The research on how to best estimate the RUL has gained popularity recently due to the rapid advances in condition and health monitoring techniques. However, due to its complicated relationship with observable health information, there is no such best approach which can be used universally to achieve the best estimate. As such this paper reviews the recent modeling developments for estimating the RUL. The review is centred on statistical data driven approaches which rely only on available past observed data and statistical models. The approaches are classified into two broad types of models, that is, models that rely on directly observed state information of the asset, and those do not. We systematically review the models and approaches reported in the literature and finally highlight future research challenges. 2010 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "c02a1c89692d88671f4be454345f3fa3", "text": "In this study, the resonant analysis and modeling of the microstrip-fed stepped-impedance (SI) slot antenna are presented by utilizing the transmission-line and lumped-element circuit topologies. This study analyzes the SI-slot antenna and systematically summarizes its frequency response characteristics, such as the resonance condition, spurious response, and equivalent circuit. Design formulas with respect to the impedance ratio of the SI slot antenna were analytically derived. The antenna designers can predict the resonant modes of the SI slot antenna without utilizing expensive EM-simulation software.", "title": "" }, { "docid": "701cad5b373f3dbc0497c23057c55c8f", "text": "In this paper, we focus on the problem of answer triggering addressed by Yang et al. (2015), which is a critical component for a real-world question answering system. We employ a hierarchical gated recurrent neural tensor (HGRNT) model to capture both the context information and the deep interactions between the candidate answers and the question. Our result on F value achieves 42.6%, which surpasses the baseline by over 10 %.", "title": "" }, { "docid": "293e2cd2647740bb65849fed003eb4ac", "text": "In this paper we apply the Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) descriptor to the field of human action recognition. A video sequence is described as a collection of spatial-temporal words after the detection of space-time interest points and the description of the area around them. Our contribution has been in the description part, showing LBP-TOP to be a promising descriptor for human action classification purposes. We have also developed several extensions to the descriptor to enhance its performance in human action recognition, showing the method to be computationally efficient.", "title": "" } ]
scidocsrr
b1faead2db0c000b0a4fcbb7325a5ad0
A Geometry-Appearance-Based Pupil Detection Method for Near-Infrared Head-Mounted Cameras
[ { "docid": "e946deae6e1d441c152dca6e52268258", "text": "The design of robust and high-performance gaze-tracking systems is one of the most important objectives of the eye-tracking community. In general, a subject calibration procedure is needed to learn system parameters and be able to estimate the gaze direction accurately. In this paper, we attempt to determine if subject calibration can be eliminated. A geometric analysis of a gaze-tracking system is conducted to determine user calibration requirements. The eye model used considers the offset between optical and visual axes, the refraction of the cornea, and Donder's law. This paper demonstrates the minimal number of cameras, light sources, and user calibration points needed to solve for gaze estimation. The underlying geometric model is based on glint positions and pupil ellipse in the image, and the minimal hardware needed for this model is one camera and multiple light-emitting diodes. This paper proves that subject calibration is compulsory for correct gaze estimation and proposes a model based on a single point for subject calibration. The experiments carried out show that, although two glints and one calibration point are sufficient to perform gaze estimation (error ~ 1deg), using more light sources and calibration points can result in lower average errors.", "title": "" } ]
[ { "docid": "b7a459e830d69f8360196641ddc2daec", "text": "Understanding software project risk can help in reducing the incidence of failure. Building on prior work, software project risk was conceptualized along six dimensions. A questionnaire was built and 507 software project managers were surveyed. A cluster analysis was then performed to identify aspects of low, medium, and high risk projects. An examination of risk dimensions across the levels revealed that even low risk projects have a high level of complexity risk. For high risk projects, the risks associated with requirements, planning and control, and the organization become more obvious. The influence of project scope, sourcing practices, and strategic orientation on project risk dimensions was also examined. Results suggested that project scope affects all dimensions of risk, whereas sourcing practices and strategic orientation had a more limited impact. A conceptual model of project risk and performance was presented. # 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a02294c9b732b0d58cb7b25faa5136c8", "text": "We consider the problem of spectrum trading with multiple licensed users (i.e., primary users) selling spectrum opportunities to multiple unlicensed users (i.e., secondary users). The secondary users can adapt the spectrum buying behavior (i.e., evolve) by observing the variations in price and quality of spectrum offered by the different primary users or primary service providers. The primary users or primary service providers can adjust their behavior in selling the spectrum opportunities to secondary users to achieve the highest utility. In this paper, we model the evolution and the dynamic behavior of secondary users using the theory of evolutionary game. An algorithm for the implementation of the evolution process of a secondary user is also presented. To model the competition among the primary users, a noncooperative game is formulated where the Nash equilibrium is considered as the solution (in terms of size of offered spectrum to the secondary users and spectrum price). For a primary user, an iterative algorithm for strategy adaptation to achieve the solution is presented. The proposed game-theoretic framework for modeling the interactions among multiple primary users (or service providers) and multiple secondary users is used to investigate network dynamics under different system parameter settings and under system perturbation.", "title": "" }, { "docid": "60dd1689962a702e72660b33de1f2a17", "text": "A grammar formalism called GHRG based on CHR is proposed analogously to the way Definite Clause Grammars are defined and implemented on top of Prolog. A CHRG executes as a robust bottom-up parser with an inherent treatment of ambiguity. The rules of a CHRG may refer to grammar symbols on either side of a sequence to be matched and this provides a powerful way to let parsing and attribute evaluation depend on linguistic context; examples show disambiguation of simple and ambiguous context-free rules and a handling of coordination in natural language. CHRGs may have rules to produce and consume arbitrary hypothesis and as an important application is shown an implementation of Assumption Grammars.", "title": "" }, { "docid": "c319111c7ed9e816ba8db253cf9a5bcd", "text": "Soft actuators made of highly elastic polymers allow novel robotic system designs, yet application-specific soft robotic systems are rarely reported. 
Taking notice of the characteristics of soft pneumatic actuators (SPAs) such as high customizability and low inherent stiffness, we report in this work the use of soft pneumatic actuators for a biomedical use - the development of a soft robot for rodents, aimed to provide a physical assistance during gait rehabilitation of a spinalized animal. The design requirements to perform this unconventional task are introduced. Customized soft actuators, soft joints and soft couplings for the robot are presented. Live animal experiment was performed to evaluate and show the potential of SPAs for their use in the current and future biomedical applications.", "title": "" }, { "docid": "fcbddff6b048bc93fd81e363d08adc6d", "text": "Question Answering (QA) system is the task where arbitrary question IS posed in the form of natural language statements and a brief and concise text returned as an answer. Contrary to search engines where a long list of relevant documents returned as a result of a query, QA system aims at providing the direct answer or passage containing the answer. We propose a general purpose question answering system which can answer wh-interrogated questions. This system is using Wikipedia data as its knowledge source. We have implemented major components of a QA system which include challenging tasks of Named Entity Tagging, Question Classification, Information Retrieval and Answer Extraction. Implementation of state-of-the-art Entity Tagging mechanism has helped identify entities where systems like OpenEphyra or DBpedia spotlight have failed. The information retrieval task includes development of a framework to extract tabular information known as Infobox from Wikipedia pages which has ensured availability of latest updated information. Answer Extraction module has implemented an attributes mapping mechanism which is helpful to extract answer from data. The system is comparable in results with other available general purpose QA systems.", "title": "" }, { "docid": "38ea50d7e6e5e1816005b3197828dbae", "text": "Life sciences research is based on individuals, often with diverse skills, assembled into research groups. These groups use their specialist expertise to address scientific problems. The in silico experiments undertaken by these research groups can be represented as workflows involving the co-ordinated use of analysis programs and information repositories that may be globally distributed. With regards to Grid computing, the requirements relate to the sharing of analysis and information resources rather than sharing computational power. The Grid project has developed the Taverna workbench for the composition and execution of workflows for the life sciences community. This experience paper describes lessons learnt during the development of Taverna. A common theme is the importance of understanding how workflows fit into the scientists’ experimental context. The lessons reflect an evolving understanding of life scientists’ requirements on a workflow environment, which is relevant to other areas of data intensive and exploratory science.", "title": "" }, { "docid": "f4bb9f769659436c79b67765145744ac", "text": "Sparse Principal Component Analysis (S-PCA) is a novel framework for learning a linear, orthonormal basis representation for structure intrinsic to an ensemble of images. S-PCA is based on the discovery that natural images exhibit structure in a low-dimensional subspace in a sparse, scale-dependent form. 
The S-PCA basis optimizes an objective function which trades off correlations among output coefficients for sparsity in the description of basis vector elements. This objective function is minimized by a simple, robust and highly scalable adaptation algorithm, consisting of successive planar rotations of pairs of basis vectors. The formulation of S-PCA is novel in that multi-scale representations emerge for a variety of ensembles including face images, images from outdoor scenes and a database of optical flow vectors representing a motion class.", "title": "" }, { "docid": "49333b20791e934ba2a4a6b5fc6382d9", "text": "Angiopoietins are ligands of the Tie2 receptor that control angiogenic remodeling in a context-dependent manner. Tie signaling is involved in multiple steps of the angiogenic remodeling process during development, including destabilization of existing vessels, endothelial cell migration, tube formation and the subsequent stabilization of newly formed tubes by mesenchymal cells. Beyond this critical role in blood vessel development, recent studies suggest a wider role for Tie2 and angiopoietins in lymphangiogenesis and the development of the hematopoietic system, as well as a possible role in the regulation of certain non-endothelial cells. The outcome of Tie signaling depends on which vascular bed is involved, and crosstalk between different VEGFs has an important modulating effect on the properties of the angiopoietins. Signaling through the Tie1 receptor is not well understood, but Tie1 may have both angiopoietin-dependent and ligand-independent functions. Changes in the expression of Tie receptors and angiopoietins occur in many pathological conditions, and mutations in the Tie2 gene are found in familial cases of vascular disease.", "title": "" }, { "docid": "228d7fa684e1caf43769fa13818b938f", "text": "Optimal tuning of proportional-integral-derivative (PID) controller parameters is necessary for the satisfactory operation of automatic voltage regulator (AVR) system. This study presents a tuning fuzzy logic approach to determine the optimal PID controller parameters in AVR system. The developed fuzzy system can give the PID parameters on-line for different operating conditions. The suitability of the proposed approach for PID controller tuning has been demonstrated through computer simulations in an AVR system.", "title": "" }, { "docid": "490fe197e7ed6c658160c8a04ee1fc82", "text": "Automatic concept learning from large scale imbalanced data sets is a key issue in video semantic analysis and retrieval, which means the number of negative examples is far more than that of positive examples for each concept in the training data. The existing methods adopt generally under-sampling for the majority negative examples or over-sampling for the minority positive examples to balance the class distribution on training data. The main drawbacks of these methods are: (1) As a key factor that affects greatly the performance, in most existing methods, the degree of re-sampling needs to be pre-fixed, which is not generally the optimal choice; (2) Many useful negative samples may be discarded in under-sampling. In addition, some works only focus on the improvement of the computational speed, rather than the accuracy. To address the above issues, we propose a new approach and algorithm named AdaOUBoost (Adaptive Over-sampling and Under-sampling Boost). 
The novelty of AdaOUBoost mainly lies in: adaptively over-sample the minority positive examples and under-sample the majority negative examples to form different sub-classifiers. And combine these sub-classifiers according to their accuracy to create a strong classifier, which aims to use fully the whole training data and improve the performance of the class-imbalance learning classifier. In AdaOUBoost, first, our clustering-based under-sampling method is employed to divide the majority negative examples into some disjoint subsets. Then, for each subset of negative examples, we utilize the borderline-SMOTE (synthetic minority over-sampling technique) algorithm to over-sample the positive examples with different size, train each sub-classifier using each of them, and get the classifier by fusing these sub-classifiers with different weights. Finally, we combine these classifiers in each subset of negative examples to create a strong classifier. We compare the performance between AdaOUBoost and the state-of-the-art methods on TRECVID 2008 benchmark with all 20 concepts, and the results show the AdaOUBoost can achieve the superior performance in large scale imbalanced data sets.", "title": "" }, { "docid": "1a9d276c4571419e0d1b297f248d874d", "text": "Organizational culture plays a critical role in the acceptance and adoption of agile principles by a traditional software development organization (Chan & Thong, 2008). Organizations must understand the differences that exist between traditional software development principles and agile principles. Based on an analysis of the literature published between 2003 and 2010, this study examines nine distinct organizational cultural factors that require change, including management style, communication, development team practices, knowledge management, and customer interactions.", "title": "" }, { "docid": "d723903b45554c7a6c2fb4f32aa5dc48", "text": "Harvard architecture CPU design is common in the embedded world. Examples of Harvard-based architecture devices are the Mica family of wireless sensors. Mica motes have limited memory and can process only very small packets. Stack-based buffer overflow techniques that inject code into the stack and then execute it are therefore not applicable. It has been a common belief that code injection is impossible on Harvard architectures. This paper presents a remote code injection attack for Mica sensors. We show how to exploit program vulnerabilities to permanently inject any piece of code into the program memory of an Atmel AVR-based sensor. To our knowledge, this is the first result that presents a code injection technique for such devices. Previous work only succeeded in injecting data or performing transient attacks. Injecting permanent code is more powerful since the attacker can gain full control of the target sensor. We also show that this attack can be used to inject a worm that can propagate through the wireless sensor network and possibly create a sensor botnet. Our attack combines different techniques such as return oriented programming and fake stack injection. We present implementation details and suggest some counter-measures.", "title": "" }, { "docid": "94fc516df0c0a5f0ebaf671befe10982", "text": "In this paper, an 8th-order cavity filter with two symmetrical transmission zeros in stopband is designedwith the method of generalized Chebyshev synthesis so as to satisfy the IMT-Advanced system demands. 
To shorten the development cycle of the filter from two or three days to several hours, a co-simulation with Ansoft HFSS and Designer is presented. The effectiveness of the co-simulation method is validated by the excellent consistency between the simulation and the experiment results.", "title": "" }, { "docid": "5f3b787993ae1ebae34d8cee3ba1a975", "text": "Neisseria meningitidis remains an important cause of severe sepsis and meningitis worldwide. The bacterium is only found in human hosts, and so must continually coexist with the immune system. Consequently, N meningitidis uses multiple mechanisms to avoid being killed by antimicrobial proteins, phagocytes, and, crucially, the complement system. Much remains to be learnt about the strategies N meningitidis employs to evade aspects of immune killing, including mimicry of host molecules by bacterial structures such as capsule and lipopolysaccharide, which poses substantial problems for vaccine design. To date, available vaccines only protect individuals against subsets of meningococcal strains. However, two promising vaccines are currently being assessed in clinical trials and appear to offer good prospects for an effective means of protecting individuals against endemic serogroup B disease, which has proven to be a major challenge in vaccine research.", "title": "" }, { "docid": "693dd8eb0370259c4ee5f8553de58443", "text": "Most research in Interactive Storytelling (IS) has sought inspiration in narrative theories issued from contemporary narratology to either identify fundamental concepts or derive formalisms for their implementation. In the former case, the theoretical approach gives raise to empirical solutions, while the latter develops Interactive Storytelling as some form of “computational narratology”, modeled on computational linguistics. In this paper, we review the most frequently cited theories from the perspective of IS research. We discuss in particular the extent to which they can actually inspire IS technologies and highlight key issues for the effective use of narratology in IS.", "title": "" }, { "docid": "49bc648b7588e3d6d512a65688ce23aa", "text": "Many Chinese websites (relying parties) use OAuth 2.0 as the basis of a single sign-on service to ease password management for users. Many sites support five or more different OAuth 2.0 identity providers, giving users choice in their trust point. However, although OAuth 2.0 has been widely implemented (particularly in China), little attention has been paid to security in practice. In this paper we report on a detailed study of OAuth 2.0 implementation security for ten major identity providers and 60 relying parties, all based in China. This study reveals two critical vulnerabilities present in many implementations, both allowing an attacker to control a victim user’s accounts at a relying party without knowing the user’s account name or password. We provide simple, practical recommendations for identity providers and relying parties to enable them to mitigate these vulnerabilities. The vulnerabilities have been reported to the parties concerned.", "title": "" }, { "docid": "9c00313926a8c625fd15da8708aa941e", "text": "OBJECTIVE\nThe objective of this study was to evaluate the effect of a dental water jet on plaque biofilm removal using scanning electron microscopy (SEM).\n\n\nMETHODOLOGY\nEight teeth with advanced aggressive periodontal disease were extracted. Ten thin slices were cut from four teeth. Two slices were used as the control. 
Eight were inoculated with saliva and incubated for 4 days. Four slices were treated using a standard jet tip, and four slices were treated using an orthodontic jet tip. The remaining four teeth were treated with the orthodontic jet tip but were not inoculated with saliva to grow new plaque biofilm. All experimental teeth were treated using a dental water jet for 3 seconds on medium pressure.\n\n\nRESULTS\nThe standard jet tip removed 99.99% of the salivary (ex vivo) biofilm, and the orthodontic jet tip removed 99.84% of the salivary biofilm. Observation of the remaining four teeth by the naked eye indicated that the orthodontic jet tip removed significant amounts of calcified (in vivo) plaque biofilm. This was confirmed by SEM evaluations.\n\n\nCONCLUSION\nThe Waterpik dental water jet (Water Pik, Inc, Fort Collins, CO) can remove both ex vivo and in vivo plaque biofilm significantly.", "title": "" }, { "docid": "f9dc4cfb42a5ec893f5819e03c64d4bc", "text": "For human pose estimation in monocular images, joint occlusions and overlapping upon human bodies often result in deviated pose predictions. Under these circumstances, biologically implausible pose predictions may be produced. In contrast, human vision is able to predict poses by exploiting geometric constraints of joint inter-connectivity. To address the problem by incorporating priors about the structure of human bodies, we propose a novel structure-aware convolutional network to implicitly take such priors into account during training of the deep network. Explicit learning of such constraints is typically challenging. Instead, we design discriminators to distinguish the real poses from the fake ones (such as biologically implausible ones). If the pose generator (G) generates results that the discriminator fails to distinguish from real ones, the network successfully learns the priors.,,To better capture the structure dependency of human body joints, the generator G is designed in a stacked multi-task manner to predict poses as well as occlusion heatmaps. Then, the pose and occlusion heatmaps are sent to the discriminators to predict the likelihood of the pose being real. Training of the network follows the strategy of conditional Generative Adversarial Networks (GANs). The effectiveness of the proposed network is evaluated on two widely used human pose estimation benchmark datasets. Our approach significantly outperforms the state-of-the-art methods and almost always generates plausible human pose predictions.", "title": "" }, { "docid": "b0bb9c4bcf666dca927d4f747bfb1ca1", "text": "Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). 
We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.", "title": "" }, { "docid": "63063c0a2b08f068c11da6d80236fa87", "text": "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the hallucinated HR details by using dynamic texture synthesis (DTS). Most existing multi-frame-based video super-resolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate sub-pixel motion estimation between frames in a LR video. To achieve high-quality reconstruction of HR details for a LR video, we propose a texture-synthesis-based video super-resolution method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a time coherent way, so as to effectively address the temporal incoherence problem caused by traditional texture synthesis based image SR methods. To further reduce the complexity of the proposed method, our method only performs the DTS-based SR on a selected set of key-frames, while the HR details of the remaining non-key-frames are simply predicted using the bi-directional overlapped block motion compensation. Experimental results demonstrate that the proposed method achieves significant subjective and objective quality improvement over state-of-the-art video SR methods.", "title": "" } ]
scidocsrr
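As a quick illustration of the class-imbalance scheme described in the AdaOUBoost passage above (clustering-based under-sampling of the majority class, SMOTE-style over-sampling of the minority class, and accuracy-weighted fusion of sub-classifiers), here is a minimal sketch. It is not the authors' algorithm: the K-means clustering step, the plain interpolation over-sampler, the decision-tree base learner, and every function and parameter name below are simplifying assumptions, so this should be read only as a structural outline.

```python
# Minimal sketch of a resample-then-ensemble scheme in the spirit of the
# AdaOUBoost passage above. All names, parameters, and the choice of K-means,
# the interpolation over-sampler, and decision trees are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def oversample_minority(X_min, n_new, k=5, seed=0):
    """SMOTE-like interpolation between minority samples and their neighbors."""
    if n_new <= 0 or len(X_min) < 2:
        return np.empty((0, X_min.shape[1]))
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(X_min))).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = rng.choice(idx[i][1:])            # one of i's nearest neighbors
        synth.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(synth)

def fit_imbalance_ensemble(X, y, n_subsets=4):
    """Cluster the negatives, pair each cluster with (real + synthetic) positives,
    train one sub-classifier per cluster, and weight it by its training accuracy."""
    X_pos, X_neg = X[y == 1], X[y == 0]
    labels = KMeans(n_clusters=n_subsets, n_init=10, random_state=0).fit_predict(X_neg)
    members = []
    for c in range(n_subsets):
        X_neg_c = X_neg[labels == c]
        extra = oversample_minority(X_pos, len(X_neg_c) - len(X_pos))
        parts = [X_neg_c, X_pos] + ([extra] if len(extra) else [])
        Xc = np.vstack(parts)
        yc = np.concatenate([np.zeros(len(X_neg_c)), np.ones(len(Xc) - len(X_neg_c))])
        clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xc, yc)
        members.append((clf, clf.score(Xc, yc)))   # weight = training accuracy
    return members

def predict_ensemble(members, X):
    total = sum(w for _, w in members)
    fused = sum(w * clf.predict_proba(X)[:, 1] for clf, w in members) / total
    return (fused >= 0.5).astype(int)
```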
ba4fd858ae6198a47a0ea3ce1f079232
Extracting semantics from audio-visual content: the final frontier in multimedia retrieval
[ { "docid": "4070072c5bd650d1ca0daf3015236b31", "text": "Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries, increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93%.", "title": "" }, { "docid": "662b1ec9e2481df760c19567ce635739", "text": "Picture yourself as a fashion designer needing images of fabrics with a particular mixture of colors, a museum cataloger looking for artifacts of a particular shape and textured pattern, or a movie producer needing a video clip of a red car-like object moving from right to left with the camera zooming. How do you find these images? Even though today's technology enables us to acquire, manipulate, transmit, and store vast on-line image and video collections, the search methodologies used to find pictorial information are still limited due to difficult research problems (see "Semantic versus nonsemantic" sidebar). Typically, these methodologies depend on file IDs, keywords, or text associated with the images. And, although powerful, they", "title": "" } ]
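The first passage in this list reduces compressed-domain MPEG measurements (replay presence, scene text, motion statistics) to a handful of per-clip features and feeds them to a decision tree. The toy sketch below covers only that last classification step; the MPEG feature extraction is assumed to have happened elsewhere, and the feature names and the synthetic data are invented for the demo.

```python
# Toy illustration of the final step described in the first passage above:
# a decision tree over per-clip features (replay presence, scene-text amount,
# motion statistics). Feature extraction from MPEG streams is assumed done
# elsewhere; the feature names and the tiny dataset are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 200
is_sports = rng.integers(0, 2, size=n)
# Hypothetical per-clip features: [has_replay, text_ratio, motion_mean, motion_var]
X = np.column_stack([
    np.where(is_sports, rng.random(n) < 0.7, rng.random(n) < 0.1),
    np.where(is_sports, rng.normal(0.15, 0.05, n), rng.normal(0.05, 0.03, n)),
    np.where(is_sports, rng.normal(8.0, 2.0, n), rng.normal(3.0, 2.0, n)),
    np.where(is_sports, rng.normal(4.0, 1.0, n), rng.normal(1.5, 1.0, n)),
])
X_tr, X_te, y_tr, y_te = train_test_split(X, is_sports, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=["has_replay", "text_ratio",
                                       "motion_mean", "motion_var"]))
```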
[ { "docid": "09e740b38d0232361c89f47fce6155b4", "text": "Nano-emulsions consist of fine oil-in-water dispersions, having droplets covering the size range of 100-600 nm. In the present work, nano-emulsions were prepared using the spontaneous emulsification mechanism which occurs when an organic phase and an aqueous phase are mixed. The organic phase is an homogeneous solution of oil, lipophilic surfactant and water-miscible solvent, the aqueous phase consists on hydrophilic surfactant and water. An experimental study of nano-emulsion process optimisation based on the required size distribution was performed in relation with the type of oil, surfactant and the water-miscible solvent. The results showed that the composition of the initial organic phase was of great importance for the spontaneous emulsification process, and so, for the physico-chemical properties of the obtained emulsions. First, oil viscosity and HLB surfactants were changed, alpha-tocopherol, the most viscous oil, gave the smallest droplets size (171 +/- 2 nm), HLB required for the resulting oil-in-water emulsion was superior to 8. Second, the effect of water-solvent miscibility on the emulsification process was studied by decreasing acetone proportion in the organic phase. The solvent-acetone proportion leading to a fine nano-emulsion was fixed at 15/85% (v/v) with EtAc-acetone and 30/70% (v/v) with MEK-acetone mixture. To strength the choice of solvents, physical characteristics were compared, in particular, the auto-inflammation temperature and the flash point. This phase of emulsion optimisation represents an important step in the process of polymeric nanocapsules preparation using nanoprecipitation or interfacial polycondensation combined with spontaneous emulsification technique.", "title": "" }, { "docid": "a95761b5a67a07d02547c542ddc7e677", "text": "This paper examines the connection between the legal environment and financial development, and then traces this link through to long-run economic growth. Countries with legal and regulatory systems that (1) give a high priority to creditors receiving the full present value of their claims on corporations, (2) enforce contracts effectively, and (3) promote comprehensive and accurate financial reporting by corporations have better-developed financial intermediaries. The data also indicate that the exogenous component of financial intermediary development – the component of financial intermediary development defined by the legal and regulatory environment – is positively associated with economic growth. * Department of Economics, 114 Rouss Hall, University of Virginia, Charlottesville, VA 22903-3288; RL9J@virginia.edu. I thank Thorsten Beck, Maria Carkovic, Bill Easterly, Lant Pritchett, Andrei Shleifer, and seminar participants at the Board of Governors of the Federal Reserve System, the University of Virginia, and the World Bank for helpful comments.", "title": "" }, { "docid": "170a1dba20901d88d7dc3988647e8a22", "text": "This paper discusses two antennas monolithically integrated on-chip to be used respectively for wireless powering and UWB transmission of a tag designed and fabricated in 0.18-μm CMOS technology. A multiturn loop-dipole structure with inductive and resistive stubs is chosen for both antennas. 
Using these on-chip antennas, the chip employs asymmetric communication links: at downlink, the tag captures the required supply wirelessly from the received RF signal transmitted by a reader and, for the uplink, ultra-wideband impulse-radio (UWB-IR), in the 3.1-10.6-GHz band, is employed instead of backscattering to achieve extremely low power and a high data rate up to 1 Mb/s. At downlink with the on-chip power-scavenging antenna and power-management unit circuitry properly designed, 7.5-cm powering distance has been achieved, which is a huge improvement in terms of operation distance compared with other reported tags with on-chip antenna. Also, 7-cm operating distance is achieved with the implemented on-chip UWB antenna. The tag can be powered up at all the three ISM bands of 915 MHz and 2.45 GHz, with off-chip antennas, and 5.8 GHz with the integrated on-chip antenna. The tag receives its clock and the commands wirelessly through the modulated RF powering-up signal. Measurement results show that the tag can operate up to 1 Mb/s data rate with a minimum input power of -19.41 dBm at 915-MHz band, corresponding to 15.7 m of operation range with an off-chip 0-dB gain antenna. This is a great improvement compared with conventional passive RFIDs in term of data rate and operation distance. The power consumption of the chip is measured to be just 16.6 μW at the clock frequency of 10 MHz at 1.2-V supply. In addition, in this paper, for the first time, the radiation pattern of an on-chip antenna at such a frequency is measured. The measurement shows that the antenna has an almost omnidirectional radiation pattern so that the chip's performance is less direction-dependent.", "title": "" }, { "docid": "0778eff54b2f48c9ed4554c617b2dcab", "text": "The diagnosis of heart disease is a significant and tedious task in medicine. The healthcare industry gathers enormous amounts of heart disease data that regrettably, are not “mined” to determine concealed information for effective decision making by healthcare practitioners. The term Heart disease encompasses the diverse diseases that affect the heart. Cardiomyopathy and Cardiovascular disease are some categories of heart diseases. The reduction of blood and oxygen supply to the heart leads to heart disease. In this paper the data classification is based on supervised machine learning algorithms which result in accuracy, time taken to build the algorithm. Tanagra tool is used to classify the data and the data is evaluated using 10-fold cross validation and the results are compared.", "title": "" }, { "docid": "037dc2916e4356c11039e9520369ca3b", "text": "Surmounting terrain elevations, such as terraces, is useful to increase the reach of mobile robots operating in disaster areas, construction sites, and natural environments. This paper proposes an autonomous climbing maneuver for tracked mobile manipulators with the help of the onboard arm. The solution includes a fast 3-D scan processing method to estimate a simple set of geometric features for the ascent: three lines that correspond to the low and high edges, and the maximum inclination axis. Furthermore, terraces are classified depending on whether they are reachable through a slope or an abrupt step. In the proposed maneuver, the arm is employed both for shifting the center of gravity of the robot and as an extra limb that can be pushed against the ground. Feedback during climbing can be obtained through an inertial measurement unit, joint absolute encoders, and pressure sensors. 
Experimental results are presented for terraces of both kinds on rough terrain with the hydraulic mobile manipulator Alacrane.", "title": "" }, { "docid": "cfb1e7710233ca9a8e91885801326c20", "text": "During the last ten years technological development has reshaped the banking industry, which has become one of the leading sectors in utilizing new technology on consumer markets. Today, mobile communication technologies offer vast additional value for consumers' banking transactions due to their always-on functionality and the option to access banks anytime and anywhere. Various alternative approaches have been used in analyzing customers' acceptance of new technologies. In this paper, the factors affecting acceptance of Mobile Banking are explored and presented as a new model.", "title": "" }, { "docid": "a0c37bb6608f51f7095d6e5392f3c2f9", "text": "The main study objective was to develop robust processing and analysis techniques to facilitate the use of small-footprint lidar data for estimating plot-level tree height by measuring individual trees identifiable on the three-dimensional lidar surface. Lidar processing techniques included data fusion with multispectral optical data and local filtering with both square and circular windows of variable size. The lidar system used for this study produced an average footprint of 0.65 m and an average distance between laser shots of 0.7 m. The lidar data set was acquired over deciduous and coniferous stands with settings typical of the southeastern United States. The lidar-derived tree measurements were used with regression models and cross-validation to estimate tree height on 0.017-ha plots. For the pine plots, lidar measurements explained 97 percent of the variance associated with the mean height of dominant trees. For deciduous plots, regression models explained 79 percent of the mean height variance for dominant trees. Filtering for local maximum with circular windows gave better fitting models for pines, while for deciduous trees, filtering with square windows provided a slightly better model fit. Using lidar and optical data fusion to differentiate between forest types provided better results for estimating average plot height for pines. Estimating tree height for deciduous plots gave superior results without calibrating the search window size based on forest type. Introduction Laser scanner systems currently available have experienced a remarkable evolution, driven by advances in the remote sensing and surveying industry. Lidar sensors offer impressive performance that challenges physical barriers in the optical and electronic domain by offering a high density of points at scanning frequencies of 50,000 pulses/second, multiple echoes per laser pulse, intensity measurements for the returning signal, and centimeter accuracy for horizontal and vertical positioning. Given a high density of points, processing algorithms can identify single trees or groups of trees in order to extract various measurements on their three-dimensional representation (e.g., Hyyppä and Inkinen, 2002). The foundations of lidar forest measurements lie with the photogrammetric techniques developed to assess tree height, volume, and biomass. 
Lidar characteristics, such as high sampling intensity, extensive areal coverage, ability to penetrate beneath the top layer of the canopy, precise geolocation, and accurate ranging measurements, make airborne laser systems useful for directly assessing vegetation characteristics. Early lidar studies had been used to estimate forest vegetation characteristics, such as percent canopy cover, biomass (Nelson et al., 1984; Nelson et al., 1988a; Nelson et al., 1988b; Nelson et al., 1997), and gross-merchantable timber volume (Maclean and Krabill, 1986). Research efforts investigated the estimation of forest stand characteristics with scanning lasers that provided lidar data with either relatively large laser footprints, i.e., 5 to 25 m (Harding et al., 1994; Lefsky et al., 1997; Weishampel et al., 1997; Blair et al., 1999; Lefsky et al., 1999; Means et al., 1999) or small footprints, but with only one laser return (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Hyyppä et al., 2001). A small-footprint lidar with the potential to record the entire time-varying distribution of returned pulse energy or waveform was used by Nilsson (1996) for measuring tree heights and stand volume. As more systems operate with high performance, research efforts for forestry applications of lidar have become very intense and resulted in a series of studies that proved that lidar technology is well suited for providing estimates of forest biophysical parameters. Needs for timely and accurate estimates of forest biophysical parameters have arisen in response to increased demands on forest inventory and analysis. The height of a forest stand is a crucial forest inventory attribute for calculating timber volume, site potential, and silvicultural treatment scheduling. Measuring of stand height by current manual photogrammetric or field survey techniques is time consuming and rather expensive. Tree heights have been derived from scanning lidar data sets and have been compared with ground-based canopy height measurements (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Næsset and Bjerknes, 2001; Næsset and Økland, 2002; Persson et al., 2002; Popescu, 2002; Popescu et al., 2002; Holmgren et al., 2003; McCombs et al., 2003). Despite the intense research efforts, practical applications of", "title": "" }, { "docid": "109c5caa55d785f9f186958f58746882", "text": "Apriori and Eclat are the best-known basic algorithms for mining frequent item sets in a set of transactions. In this paper I describe implementations of these two algorithms that use several optimizations to achieve maximum performance, w.r.t. both execution time and memory usage. The Apriori implementation is based on a prefix tree representation of the needed counters and uses a doubly recursive scheme to count the transactions. 
The Eclat implementation uses (sparse) bit matrices to represent transactions lists and to filter closed and maximal item sets.", "title": "" }, { "docid": "4f9b168efee2348f0f02f2480f9f449f", "text": "Transcutaneous neuromuscular electrical stimulation applied in clinical settings is currently characterized by a wide heterogeneity of stimulation protocols and modalities. Practitioners usually refer to anatomic charts (often provided with the user manuals of commercially available stimulators) for electrode positioning, which may lead to inconsistent outcomes, poor tolerance by the patients, and adverse reactions. Recent evidence has highlighted the crucial importance of stimulating over the muscle motor points to improve the effectiveness of neuromuscular electrical stimulation. Nevertheless, the correct electrophysiological definition of muscle motor point and its practical significance are not always fully comprehended by therapists and researchers in the field. The commentary describes a straightforward and quick electrophysiological procedure for muscle motor point identification. It consists in muscle surface mapping by using a stimulation pen-electrode and it is aimed at identifying the skin area above the muscle where the motor threshold is the lowest for a given electrical input, that is the skin area most responsive to electrical stimulation. After the motor point mapping procedure, a proper placement of the stimulation electrode(s) allows neuromuscular electrical stimulation to maximize the evoked tension, while minimizing the dose of the injected current and the level of discomfort. If routinely applied, we expect this procedure to improve both stimulation effectiveness and patient adherence to the treatment. The aims of this clinical commentary are to present an optimized procedure for the application of neuromuscular electrical stimulation and to highlight the clinical implications related to its use.", "title": "" }, { "docid": "619e3893a731ffd0ed78c9dd386a1dff", "text": "The introduction of new gesture interfaces has been expanding the possibilities of creating new Digital Musical Instruments (DMIs). Leap Motion Controller was recently launched promising fine-grained hand sensor capabilities. This paper proposes a preliminary study and evaluation of this new sensor for building new DMIs. Here, we list a series of gestures, recognized by the device, which could be theoretically used for playing a large number of musical instruments. Then, we present an analysis of precision and latency of these gestures as well as a first case study integrating Leap Motion with a virtual music keyboard.", "title": "" }, { "docid": "df0756ecff9f2ba84d6db342ee6574d3", "text": "Security is becoming a critical part of organizational information systems. Intrusion detection system (IDS) is an important detection that is used as a countermeasure to preserve data integrity and system availability from attacks. Data mining is being used to clean, classify, and examine large amount of network data to correlate common infringement for intrusion detection. The main reason for using data mining techniques for intrusion detection systems is due to the enormous volume of existing and newly appearing network data that require processing. The amount of data accumulated each day by a network is huge. Several data mining techniques such as clustering, classification, and association rules are proving to be useful for gathering different knowledge for intrusion detection. 
This paper presents the idea of applying data mining techniques to intrusion detection systems to maximize the effectiveness in identifying attacks, thereby helping the users to construct more secure information systems.", "title": "" }, { "docid": "058db5e1a8c58a9dc4b68f6f16847abc", "text": "Insurance companies must manage millions of claims per year. While most of these claims are non-fraudulent, fraud detection is core for insurance companies. The ultimate goal is a predictive model to single out the fraudulent claims and pay out the non-fraudulent ones immediately. Modern machine learning methods are well suited for this kind of problem. Health care claims often have a data structure that is hierarchical and of variable length. We propose one model based on piecewise feed forward neural networks (deep learning) and another model based on self-attention neural networks for the task of claim management. We show that the proposed methods outperform bagof-words based models, hand designed features, and models based on convolutional neural networks, on a data set of two million health care claims. The proposed self-attention method performs the best.", "title": "" }, { "docid": "619165e7f74baf2a09271da789e724df", "text": "MOST verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.", "title": "" }, { "docid": "05ab4fa15696ee8b47e017ebbbc83f2c", "text": "Vertically aligned rutile TiO2 nanowire arrays (NWAs) with lengths of ∼44 μm have been successfully synthesized on transparent, conductive fluorine-doped tin oxide (FTO) glass by a facile one-step solvothermal method. The length and wire-to-wire distance of NWAs can be controlled by adjusting the ethanol content in the reaction solution. By employing optimized rutile TiO2 NWAs for dye-sensitized solar cells (DSCs), a remarkable power conversion efficiency (PCE) of 8.9% is achieved. Moreover, in combination with a light-scattering layer, the performance of a rutile TiO2 NWAs based DSC can be further enhanced, reaching an impressive PCE of 9.6%, which is the highest efficiency for rutile TiO2 NWA based DSCs so far.", "title": "" }, { "docid": "0ccbc8579a1d6e39c92f8a7acea979bd", "text": "In mental health, the term ‘recovery’ is commonly used to refer to the lived experience of the person coming to terms with, and overcoming the challenges associated with, having a mental illness (Shepherd et al 2008). The term ‘recovery’ has evolved as having a special meaning for mental health service users (Andresen et al 2003) and consistently refers to their personal experiences and expectations for recovery (Slade et al 2008). 
On the other hand, mental health service providers often refer to a ‘recovery’ framework in order to promote their service (Meehan et al 2008). However, practitioners lean towards a different meaning-in-use, which is better described as ‘clinical recovery’ and is measured routinely in terms of symptom profiles, health service utilisation, health outcomes and global assessments of functioning. These very different meanings-in-use of the same term have the potential to cause considerable confusion to readers of the mental health literature. Researchers have recently identified an urgent need to clarify the recovery concept so that a common meaning can be established and the construct can be defined operationally (Meehan et al 2008, Slade et al 2008). This paper aims to delineate a construct of recovery that can be applied operationally and consistently in mental health. The criteria were twofold: 1. The dimensions need to have a parsimonious and near mutually exclusive internal structure 2. All stakeholder perspectives and interests, including those of the wider community, need to be accommodated. With these criteria in mind, the literature was revisited to identify possible domains. It was subsequently identified that the recovery literature can be reclassified into components that accommodate the views of service users, practitioners, rehabilitation providers, family and carers, and the wider community. The recovery dimensions identified were clinical recovery, personal recovery, social recovery and functional recovery. Recovery as a concept has gained increased attention in the field of mental health. There is an expectation that service providers use a recovery framework in their work. This raises the question of what recovery means, and how it is conceptualised and operationalised. It is proposed that service providers approach the application of recovery principles by considering systematically individual recovery goals in multiple domains, encompassing clinical recovery, personal recovery, social recovery and functional recovery. This approach enables practitioners to focus on service users’ personal recovery goals while considering parallel goals in the clinical, social, and role-functioning domains. Practitioners can reconceptualise recovery as involving more than symptom remission, and interventions can be tailored to aspects of recovery of importance to service users. In order to accomplish this shift, practitioners will require effective assessments, access to optimal treatment and care, and the capacity to conduct recovery planning in collaboration with service users and their families and carers. Mental health managers can help by fostering an organisational culture of service provision that supports a broader focus than that on clinical recovery alone, extending to client-centred recovery planning in multiple recovery domains.", "title": "" }, { "docid": "16a6c26d6e185be8383c062c6aa620f8", "text": "In this research, we suggested a vision-based traffic accident detection system for automatically detecting, recording, and reporting traffic accidents at intersections. This model first extracts the vehicles from the video image of CCD camera, tracks the moving vehicles, and extracts features such as the variation rate of the velocity, position, area, and direction of moving vehicles. The model then makes decisions on the traffic accident based on the extracted features. And we suggested and designed the metadata registry for the system to improve the interoperability. 
In the field test, 4 traffic accidents were detected and recorded by the system. The video clips are invaluable for intersection safety analysis.", "title": "" }, { "docid": "74ce3b76d697d59df0c5d3f84719abb8", "text": "Existing Byzantine fault tolerance (BFT) protocols face significant challenges in the consortium blockchain scenario. On the one hand, we can make little assumptions about the reliability and security of the underlying Internet. On the other hand, the applications on consortium blockchains demand a system as scalable as the Bitcoin but providing much higher performance, as well as provable safety. We present a new BFT protocol, Gosig, that combines crypto-based secret leader selection and multi-round voting in the protocol layer with implementation layer optimizations such as gossip-based message propagation. In particular, Gosig guarantees safety even in a network fully controlled by adversaries, while providing provable liveness with easy-to-achieve network connectivity assumption. On a wide area testbed consisting of 140 Amazon EC2 servers spanning 14 cities on five continents, we show that Gosig can achieve over 4,000 transactions per second with less than 1 minute transaction confirmation time.", "title": "" }, { "docid": "9c3218ce94172fd534e2a70224ee564f", "text": "Author ambiguity mainly arises when several different authors express their names in the same way, generally known as the namesake problem, and also when the name of an author is expressed in many different ways, referred to as the heteronymous name problem. These author ambiguity problems have long been an obstacle to efficient information retrieval in digital libraries, causing incorrect identification of authors and impeding correct classification of their publications. It is a nontrivial task to distinguish those authors, especially when there is very limited information about them. In this paper, we propose a graph based approach to author name disambiguation, where a graph model is constructed using the co-author relations, and author ambiguity is resolved by graph operations such as vertex (or node) splitting and merging based on the co-authorship. In our framework, called a Graph Framework for Author Disambiguation (GFAD), the namesake problem is solved by splitting an author vertex involved in multiple cycles of co-authorship, and the heteronymous name problem is handled by merging multiple author vertices having similar names if those vertices are connected to a common vertex. Experiments were carried out with the real DBLP and Arnetminer collections and the performance of GFAD is compared with three representative unsupervised author name disambiguation systems. We confirm that GFAD shows better overall performance from the perspective of representative evaluation metrics. An additional contribution is that we released the refined DBLP collection to the public to facilitate organizing a performance benchmark for future systems on author disambiguation.", "title": "" }, { "docid": "207bb3922ad45daa1023b70e1a18baf7", "text": "The article explains how photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries. The PRNU is an intrinsic property of all digital imaging sensors due to slight variations among individual pixels in their ability to convert photons to electrons. 
Consequently, every sensor casts a weak noise-like pattern onto every image it takes. This pattern, which plays the role of a sensor fingerprint, is essentially an unintentional stochastic spread-spectrum watermark that survives processing, such as lossy compression or filtering. This tutorial explains how this fingerprint can be estimated from images taken by the camera and later detected in a given image to establish image origin and integrity. Various forensic tasks are formulated as a two-channel hypothesis testing problem approached using the generalized likelihood ratio test. The performance of the introduced forensic methods is briefly illustrated on examples to give the reader a sense of the performance.", "title": "" }, { "docid": "d80fc668073878c476bdf3997b108978", "text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data-centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as a data-centric software architecture. Providing the data stream functionalities to drivers and passengers is highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs an architecture independent of the data stream schema of the in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents the specifications and design of the query language and APIs of the platform, evaluates it, and discusses the results. Keywords—Android, automotive, data stream management system", "title": "" } ]
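The PRNU passage at the start of this line describes estimating a camera fingerprint from its images and then detecting that fingerprint in a query image by correlation. The sketch below is only a bare-bones version of that idea: a Gaussian denoiser stands in for the wavelet filters used in practice, the synthetic "camera" and its pattern are made up, and the maximum-likelihood weighting and generalized likelihood ratio test from the passage are omitted.

```python
# Bare-bones sketch of the PRNU idea described above: the fingerprint is
# estimated by averaging noise residuals of several images from the same
# camera, and a query image is tested by correlating its residual with it.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """High-frequency residual: image minus a denoised version of itself."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def estimate_fingerprint(images):
    """Average the residuals of same-camera images (a crude PRNU estimate)."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def detection_statistic(query_img, fingerprint):
    """Normalized correlation between the query residual and the fingerprint."""
    r = noise_residual(query_img).ravel()
    f = fingerprint.ravel()
    r, f = r - r.mean(), f - f.mean()
    return float(np.dot(r, f) / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))

# Synthetic demo: images from "camera A" share a fixed multiplicative pattern.
rng = np.random.default_rng(1)
prnu = 0.02 * rng.standard_normal((64, 64))           # hidden sensor pattern
scene = lambda: gaussian_filter(rng.random((64, 64)) * 255, 3)
cam_a = [scene() * (1 + prnu) + rng.normal(0, 2, (64, 64)) for _ in range(20)]
K = estimate_fingerprint(cam_a)
same = detection_statistic(scene() * (1 + prnu), K)    # should correlate
other = detection_statistic(scene(), K)                # should not
print(f"same camera: {same:.3f}   different camera: {other:.3f}")
```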
scidocsrr
99bb3c92cbbc43f00a1be095270da6a0
Design Challenges and Misconceptions in Neural Sequence Labeling
[ { "docid": "afd00b4795637599f357a7018732922c", "text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.", "title": "" } ]
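The tagger described in the passage above hinges on conditioning each tag on both its left and right neighboring context together with rich lexical features. The fragment below is only a schematic stand-in for that idea, a plain per-token classifier over left-neighbor and right-neighbor features; it is not the cyclic dependency network of the paper, and the miniature training corpus is invented so the example runs.

```python
# Schematic stand-in for the idea in the tagging passage above: score each
# token with features drawn from both its left and right neighbors. This is a
# plain per-token classifier, not the dependency-network model described there.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(sent, i):
    w = sent[i]
    return {
        "w": w.lower(), "suffix3": w[-3:].lower(), "is_cap": w[0].isupper(),
        "prev": sent[i - 1].lower() if i > 0 else "<s>",               # left context
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "</s>",  # right context
    }

train = [
    (["the", "dog", "barks"], ["DET", "NOUN", "VERB"]),
    (["a", "cat", "sleeps"], ["DET", "NOUN", "VERB"]),
    (["dogs", "chase", "the", "cat"], ["NOUN", "VERB", "DET", "NOUN"]),
]
X = [token_features(s, i) for s, _ in train for i in range(len(s))]
y = [t for _, tags in train for t in tags]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)

test = ["the", "cat", "barks"]
feats = [token_features(test, i) for i in range(len(test))]
print(list(zip(test, model.predict(feats))))
```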
[ { "docid": "995a5523c131e09f8a8f04a3cf304045", "text": "Topic models are often applied in industrial settings to discover user profiles from activity logs where documents correspond to users and words to complex objects such as web sites and installed apps. Standard topic models ignore the content-based similarity structure between these objects largely because of the inability of the Dirichlet prior to capture such side information of word-word correlation. Several approaches were proposed to replace the Dirichlet prior with more expressive alternatives. However, this added expressivity comes with a heavy premium: inference becomes intractable and sparsity is lost, which renders these alternatives not suitable for industrial-scale applications. In this paper we take a radically different approach to incorporating word-word correlation in topic models by applying this side information at the posterior level rather than at the prior level. We show that this choice preserves sparsity and results in a graph-based sampler for LDA whose computational complexity is asymptotically on par with the state-of-the-art Alias-based sampler for LDA \\cite{aliasLDA}. We illustrate the efficacy of our approach over real industrial datasets that span up to a billion users, tens of millions of words and thousands of topics. To the best of our knowledge, our approach provides the first practical and scalable solution to this important problem.", "title": "" }, { "docid": "1f714aea64a7d23743e507724e4d531b", "text": "At the moment, Support Vector Machine (SVM) has been widely used in the study of stock investment related topics. Stock investment can be further divided into three strategies: buy, sell and hold. Using data concerning China Steel Corporation, this article adopts a genetic algorithm for the search of the best SVM parameters and the selection of the best SVM prediction variables, which are then compared with Logistic Regression for the classification prediction capability of stock investment. From the classification prediction results and the AUC of the models presented in this article, it can be seen that the SVM, after adjustment of input variables and parameters, has classification prediction capability relatively superior to that of the other three models.", "title": "" }, { "docid": "7655df3f32e6cf7a5545ae2231f71e7c", "text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. 
Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.", "title": "" }, { "docid": "37845c0912d9f1b355746f41c7880c3a", "text": "Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5.", "title": "" }, { "docid": "4d2dad29f0f02d448c78b7beda529022", "text": "This paper proposes a novel diagnosis method for detection and discrimination of two typical mechanical failures in induction motors by stator current analysis: load torque oscillations and dynamic rotor eccentricity. A theoretical analysis shows that each fault modulates the stator current in a different way: torque oscillations lead to stator current phase modulation, whereas rotor eccentricities produce stator current amplitude modulation. The use of traditional current spectrum analysis involves identical frequency signatures with the two fault types. A time-frequency analysis of the stator current with the Wigner distribution leads to different fault signatures that can be used for a more accurate diagnosis. The theoretical considerations and the proposed diagnosis techniques are validated on experimental signals.", "title": "" }, { "docid": "318904a334dfa03a6cb4720c31673dda", "text": "Choosing the most appropriate dietary assessment tool for a study can be a challenge. Through a scoping review, we characterized self-report tools used to assess diet in Canada to identify patterns in tool use and to inform strategies to strengthen nutrition research. The research databases Medline, PubMed, PsycINFO, and CINAHL were used to identify Canadian studies published from 2009 to 2014 that included a self-report assessment of dietary intake. The search elicited 2358 records that were screened to identify those that reported on self-report dietary intake among nonclinical, non-Aboriginal adult populations. A pool of 189 articles (reflecting 92 studies) was examined in-depth to assess the dietary assessment tools used. Food-frequency questionnaires (FFQs) and screeners were used in 64% of studies, whereas food records and 24-h recalls were used in 18% and 14% of studies, respectively. Three studies (3%) used a single question to assess diet, and for 3 studies the tool used was not clear. 
A variety of distinct FFQs and screeners, including those developed and/or adapted for use in Canada and those developed elsewhere, were used. Some tools were reported to have been evaluated previously in terms of validity or reliability, but details of psychometric testing were often lacking. Energy and fat were the most commonly studied, reported by 42% and 39% of studies, respectively. For ∼20% of studies, dietary data were used to assess dietary quality or patterns, whereas close to half assessed ≤5 dietary components. A variety of dietary assessment tools are used in Canadian research. Strategies to improve the application of current evidence on best practices in dietary assessment have the potential to support a stronger and more cohesive literature on diet and health. Such strategies could benefit from national and global collaboration.", "title": "" }, { "docid": "5f7ea9c7398ddbb5062d029e307fcf22", "text": "This paper presents a low cost and flexible home control and monitoring system using an embedded micro-web server, with IP connectivity for accessing and controlling devices and appliances remotely using Android based Smart phone app. The proposed system does not require a dedicated server PC with respect to similar systems and offers a novel communication protocol to monitor and control the home environment with more than just the switching functionality.", "title": "" }, { "docid": "5487ee527ef2a9f3afe7f689156e7e4d", "text": "Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a general “compare-aggregate” framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than standard neural network and neural tensor network.", "title": "" }, { "docid": "a9f6c0dfd884fb22e039b37e98f22fe0", "text": "Image semantic segmentation is a fundamental problem and plays an important role in computer vision and artificial intelligence. Recent deep neural networks have improved the accuracy of semantic segmentation significantly. Meanwhile, the number of network parameters and floating point operations have also increased notably. The realworld applications not only have high requirements on the segmentation accuracy, but also demand real-time processing. In this paper, we propose a pyramid pooling encoder-decoder network named PPEDNet for both better accuracy and faster processing speed. Our encoder network is based on VGG16 and discards the fully connected layers due to their huge amounts of parameters. To extract context feature efficiently, we design a pyramid pooling architecture. The decoder is a trainable convolutional network for upsampling the output of the encoder, and finetuning the segmentation details. Our method is evaluated on CamVid dataset, achieving 7.214% mIOU accuracy improvement while reducing 17.9% of the parameters compared with the state-of-the-art algorithm.", "title": "" }, { "docid": "be8864d6fb098c8a008bfeea02d4921a", "text": "Active testing has recently been introduced to effectively test concurrent programs. Active testing works in two phases. 
It first uses predictive off-the-shelf static or dynamic program analyses to identify potential concurrency bugs, such as data races, deadlocks, and atomicity violations. In the second phase, active testing uses the reports from these predictive analyses to explicitly control the underlying scheduler of the concurrent program to accurately and quickly discover real concurrency bugs, if any, with very high probability and little overhead. In this paper, we present an extensible framework for active testing of Java programs. The framework currently implements three active testers based on data races, atomic blocks, and deadlocks.", "title": "" }, { "docid": "14c278147defd19feb4e18d31a3fdcfb", "text": "Efficient provisioning of resources is a challenging problem in cloud computing environments due to its dynamic nature and the need for supporting heterogeneous applications with different performance requirements. Currently, cloud datacenter providers either do not offer any performance guarantee or prefer static VM allocation over dynamic, which lead to inefficient utilization of resources. Earlier solutions, concentrating on a single type of SLAs (Service Level Agreements) or resource usage patterns of applications, are not suitable for cloud computing environments. In this paper, we tackle the resource allocation problem within a datacenter that runs different type of application workloads, particularly non-interactive and transactional applications. We propose admission control and scheduling mechanism which not only maximizes the resource utilization and profit, but also ensures the SLA requirements of users. In our experimental study, the proposed mechanism has shown to provide substantial improvement over static server consolidation and reduces SLA Violations.", "title": "" }, { "docid": "ce8729f088aaf9f656c9206fc67ff4bd", "text": "Traditional passive radar detectors compute cross correlation of the raw data in the reference and surveillance channels. However, there is no optimality guarantee for this detector in the presence of a noisy reference. Here, we develop a new detector that utilizes a test statistic based on the cross correlation of the principal left singular vectors of the reference and surveillance signal-plus-noise matrices. This detector offers better performance by exploiting the inherent low-rank structure when the transmitted signals are a weighted periodic summation of several identical waveforms (amplitude and phase modulation), as is the case with commercial digital illuminators as well as noncooperative radar. We consider a scintillating target. We provide analytical detection performance guarantees establishing signal-to-noise ratio thresholds above which the proposed detection statistic reliably discriminates, in an asymptotic sense, the signal versus no-signal hypothesis. We validate these results using extensive numerical simulations. We demonstrate the “near constant false alarm rate (CFAR)” behavior of the proposed detector with respect to a fixed, SNR-independent threshold and contrast that with the need to adjust the detection threshold in an SNR-dependent manner to maintain CFAR for other detectors found in the literature. Extensions of the proposed detector for settings applicable to orthogonal frequency division multiplexing (OFDM), adaptive radar are discussed.", "title": "" }, { "docid": "331df0bd161470558dd5f5061d2b1743", "text": "The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. 
However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system’s efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data.", "title": "" }, { "docid": "4ac8435b96c020231c775c4625b5ff0a", "text": "This article addresses the issue of student writing in higher education. It draws on the findings of an Economic and Social Research Council funded project which examined the contrasting expectations and interpretations of academic staff and students regarding undergraduate students' written assignments. It is suggested that the implicit models that have generally been used to understand student writing do not adequately take account of the importance of issues of identity and the institutional relationships of power and authority that surround, and are embedded within, diverse student writing practices across the university. A contrasting and therefore complementary perspective is used to present debates about 'good' and `poor' student writing. The article outlines an 'academic literacies' framework which can take account of the conflicting and contested nature of writing practices, and may therefore be more valuable for understanding student writing in today's higher education than traditional models and approaches.", "title": "" }, { "docid": "2aa298d65ad723f7c89597165c563587", "text": "Recommender systems are needed to find food items of one’s interest. We review recommender systems and recommendation methods. We propose a food personalization framework based on adaptive hypermedia. We extend Hermes framework with food recommendation functionality. We combine TF-IDF term extraction method with cosine similarity measure. Healthy heuristics and standard food database are incorporated into the knowledgebase. Based on the performed evaluation, we conclude that semantic recommender systems in general outperform traditional recommenders systems with respect to accuracy, precision, and recall, and that the proposed recommender has a better F-measure than existing semantic recommenders.", "title": "" }, { "docid": "6c6e4e776a3860d1df1ccd7af7f587d5", "text": "We introduce new families of Integral Probability Metrics (IPM) for training Generative Adversarial Networks (GAN). 
Our IPMs are based on matching statistics of distributions embedded in a finite dimensional feature space. Mean and covariance feature matching IPMs allow for stable training of GANs, which we will call McGan. McGan minimizes a meaningful loss between distributions.", "title": "" }, { "docid": "e07198de4fe8ea55f2c04ba5b6e9423a", "text": "Query expansion (QE) is a well known technique to improve retrieval effectiveness, which expands original queries with extra terms that are predicted to be relevant. A recent trend in the literature is Supervised Query Expansion (SQE), where supervised learning is introduced to better select expansion terms. However, an important but neglected issue for SQE is its efficiency, as applying SQE in retrieval can be much more time-consuming than applying Unsupervised Query Expansion (UQE) algorithms. In this paper, we point out that the cost of SQE mainly comes from term feature extraction, and propose a Two-stage Feature Selection framework (TFS) to address this problem. The first stage is adaptive expansion decision, which determines if a query is suitable for SQE or not. For unsuitable queries, SQE is skipped and no term features are extracted at all, which reduces the most time cost. For those suitable queries, the second stage is cost constrained feature selection, which chooses a subset of effective yet inexpensive features for supervised learning. Extensive experiments on four corpora (including three academic and one industry corpus) show that our TFS framework can substantially reduce the time cost for SQE, while maintaining its effectiveness.", "title": "" }, { "docid": "826e01210bb9ce8171ed72043b4a304d", "text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.", "title": "" }, { "docid": "06113aca54d87ade86127f2844df6bfd", "text": "A growing number of people use social networking sites to foster social relationships among each other. While the advantages of the provided services are obvious, drawbacks on a users' privacy and arising implications are often neglected. In this paper we introduce a novel attack called automated social engineering which illustrates how social networking sites can be used for social engineering. Our approach takes classical social engineering one step further by automating tasks which formerly were very time-intensive. In order to evaluate our proposed attack cycle and our prototypical implementation (ASE bot), we conducted two experiments. Within the first experiment we examine the information gathering capabilities of our bot. The second evaluation of our prototype performs a Turing test. 
The promising results of the evaluation highlight the possibility to efficiently and effectively perform social engineering attacks by applying automated social engineering bots.", "title": "" }, { "docid": "71c34b48cd22a0a8bc9b507e05919301", "text": "Under the action of wind, tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. While the alongwind loads have been successfully treated using quasi-steady and strip theories in terms of gust loading factors, the acrosswind and torsional loads cannot be treated in this manner, since these loads cannot be related in a straightforward manner to the fluctuations in the approach flow. Accordingly, most current codes and standards provide little guidance for the acrosswind and torsional response. To fill this gap, a preliminary, interactive database of aerodynamic loads is presented, which can be accessed by any user with Microsoft Explorer at the URL address http://www.nd.edu/;nathaz/. The database is comprised of high-frequency base balance measurements on a host of isolated tall building models. Combined with the analysis procedure provided, the nondimensional aerodynamic loads can be used to compute the wind-induced response of tall buildings. The influence of key parameters, such as the side ratio, aspect ratio, and turbulence characteristics for rectangular sections, is also discussed. The database and analysis procedure are viable candidates for possible inclusion as a design guide in the next generation of codes and standards. DOI: 10.1061/~ASCE!0733-9445~2003!129:3~394! CE Database keywords: Aerodynamics; Wind loads; Wind tunnels; Databases; Random vibration; Buildings, high-rise; Turbulence. 394 / JOURNAL OF STRUCTURAL ENGINEERING © ASCE / MARCH 2003 tic model tests are presently used as routine tools in commercial design practice. However, considering the cost and lead time needed for wind tunnel testing, a simplified procedure would be desirable in the preliminary design stages, allowing early assessment of the structural resistance, evaluation of architectural or structural changes, or assessment of the need for detailed wind tunnel tests. Two kinds of wind tunnel-based procedures have been introduced in some of the existing codes and standards to treat the acrosswind and torsional response. The first is an empirical expression for the wind-induced acceleration, such as that found in the National Building Code of Canada ~NBCC! ~NRCC 1996!, while the second is an aerodynamic-load-based procedure such as those in Australian Standard ~AS 1989! and the Architectural Institute of Japan ~AIJ! Recommendations ~AIJ 1996!. The latter approach offers more flexibility as the aerodynamic load provided can be used to determine the response of any structure having generally the same architectural features and turbulence environment of the tested model, regardless of its structural characteristics. Such flexibility is made possible through the use of well-established wind-induced response analysis procedures. Meanwhile, there are some databases involving isolated, generic building shapes available in the literature ~e.g., Kareem 1988; Choi and Kanda 1993; Marukawa et al. 1992!, which can be expanded using HFBB tests. For example, a number of commercial wind tunnel facilities have accumulated data of actual buildings in their natural surroundings, which may be used to supplement the overall loading database. 
Though such HFBB data has been collected, it has not been assimilated and made accessible to the worldwide community, to fully realize its potential. Fortunately, the Internet now provides the opportunity to pool and archive the international stores of wind tunnel data. This paper takes the first step in that direction by introducing an interactive database of aerodynamic loads obtained from HFBB measurements on a host of isolated tall building models, accessible to the worldwide Internet community via Microsoft Explorer at the URL address http://www.nd.edu/;nathaz. Through the use of this interactive portal, users can select the Engineer, Malouf Engineering International, Inc., 275 W. Campbell Rd., Suite 611, Richardson, TX 75080; Fomerly, Research Associate, NatHaz Modeling Laboratory, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: yzhou@nd.edu Graduate Student, NatHaz Modeling Laboratory, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: tkijewsk@nd.edu Robert M. Moran Professor, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: kareem@nd.edu. Note. Associate Editor: Bogusz Bienkiewicz. Discussion open until August 1, 2003. Separate discussions must be submitted for individual papers. To extend the closing date by one month, a written request must be filed with the ASCE Managing Editor. The manuscript for this paper was submitted for review and possible publication on April 24, 2001; approved on December 11, 2001. This paper is part of the Journal of Structural Engineering, Vol. 129, No. 3, March 1, 2003. ©ASCE, ISSN 0733-9445/2003/3-394–404/$18.00. Introduction Under the action of wind, typical tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. It has been recognized that for many high-rise buildings the acrosswind and torsional response may exceed the alongwind response in terms of both serviceability and survivability designs ~e.g., Kareem 1985!. Nevertheless, most existing codes and standards provide only procedures for the alongwind response and provide little guidance for the critical acrosswind and torsional responses. This is partially attributed to the fact that the acrosswind and torsional responses, unlike the alongwind, result mainly from the aerodynamic pressure fluctuations in the separated shear layers and wake flow fields, which have prevented, to date, any acceptable direct analytical relation to the oncoming wind velocity fluctuations. Further, higher-order relationships may exist that are beyond the scope of the current discussion ~Gurley et al. 2001!. Wind tunnel measurements have thus served as an effective alternative for determining acrosswind and torsional loads. For example, the high-frequency base balance ~HFBB! and aeroelasgeometry and dimensions of a model building, from the available choices, and specify an urban or suburban condition. Upon doing so, the aerodynamic load spectra for the alongwind, acrosswind, or torsional response is displayed along with a Java interface that permits users to specify a reduced frequency of interest and automatically obtain the corresponding spectral value. When coupled with the concise analysis procedure, discussion, and example provided, the database provides a comprehensive tool for computation of the wind-induced response of tall buildings. 
Wind-Induced Response Analysis Procedure Using the aerodynamic base bending moment or base torque as the input, the wind-induced response of a building can be computed using random vibration analysis by assuming idealized structural mode shapes, e.g., linear, and considering the special relationship between the aerodynamic moments and the generalized wind loads ~e.g., Tschanz and Davenport 1983; Zhou et al. 2002!. This conventional approach yields only approximate estimates of the mode-generalized torsional moments and potential inaccuracies in the lateral loads if the sway mode shapes of the structure deviate significantly from the linear assumption. As a result, this procedure often requires the additional step of mode shape corrections to adjust the measured spectrum weighted by a linear mode shape to the true mode shape ~Vickery et al. 1985; Boggs and Peterka 1989; Zhou et al. 2002!. However, instead of utilizing conventional generalized wind loads, a base-bendingmoment-based procedure is suggested here for evaluating equivalent static wind loads and response. As discussed in Zhou et al. ~2002!, the influence of nonideal mode shapes is rather negligible for base bending moments, as opposed to other quantities like base shear or generalized wind loads. As a result, base bending moments can be used directly, presenting a computationally efficient scheme, averting the need for mode shape correction and directly accommodating nonideal mode shapes. Application of this procedure for the alongwind response has proven effective in recasting the traditional gust loading factor approach in a new format ~Zhou et al. 1999; Zhou and Kareem 2001!. The procedure can be conveniently adapted to the acrosswind and torsional response ~Boggs and Peterka 1989; Kareem and Zhou 2003!. It should be noted that the response estimation based on the aerodynamic database is not advocated for acrosswind response calculations in situations where the reduced frequency is equal to or slightly less than the Strouhal number ~Simiu and Scanlan 1996; Kijewski et al. 2001!. In such cases, the possibility of negative aerodynamic damping, a manifestation of motion-induced effects, may cause the computed results to be inaccurate ~Kareem 1982!. Assuming a stationary Gaussian process, the expected maximum base bending moment response in the alongwind or acrosswind directions or the base torque response can be expressed in the following form:", "title": "" } ]
scidocsrr
8ea55164cabccfab554e3e6a0bc34ea0
Interactive virtual try-on clothing design systems
[ { "docid": "f3abf5a6c20b6fff4970e1e63c0e836b", "text": "We demonstrate a physically-based technique for predicting the drape of a wide variety of woven fabrics. The approach exploits a theoretical model that explicitly represents the microstructure of woven cloth with interacting particles, rather than utilizing a continuum approximation. By testing a cloth sample in a Kawabata fabric testing device, we obtain data that is used to tune the model's energy functions, so that it reproduces the draping behavior of the original material. Photographs, comparing the drape of actual cloth with visualizations of simulation results, show that we are able to reliably model the unique large-scale draping characteristics of distinctly different fabric types.", "title": "" } ]
[ { "docid": "623f303fd7fbcd88bfdb6f55855dce3c", "text": "Causation relations are a pervasive feature of human language. Despite this, the automatic acquisition of causal information in text has proved to be a difficult task in NLP. This paper provides a method for the automatic detection and extraction of causal relations. We also present an inductive learning approach to the automatic discovery of lexical and semantic constraints necessary in the disambiguation of causal relations that are then used in question answering. We devised a classification of causal questions and tested the procedure on a QA system.", "title": "" }, { "docid": "9b55e6dc69517848ae5e5040cd9d0d55", "text": "In this paper, we utilize distributed word representations (i.e., word embeddings) to analyse the representation of semantics in brain activity. The brain activity data were recorded using functional magnetic resonance imaging (fMRI) when subjects were viewing words. First, we analysed the functional selectivity of different cortex areas by calculating the correlations between neural responses and several types of word representations, including skipgram word embeddings, visual semantic vectors, and primary visual features. The results demonstrated consistency with existing neuroscientific knowledge. Second, we utilized behavioural data as the semantic ground truth to measure their relevance with brain activity. A method to estimate word embeddings under the constraints of brain activity similarities is further proposed based on the semantic word embedding (SWE) model. The experimental results show that the brain activity data are significantly correlated with the behavioural data of human judgements on semantic similarity. The correlations between the estimated word embeddings and the semantic ground truth can be effectively improved after integrating the brain activity data for learning, which implies that semantic patterns in neural representations may exist that have not been fully captured by state-of-the-art word embeddings derived from text corpora.", "title": "" }, { "docid": "65f4e93ac371d72b93c40f4fe9215805", "text": "Trie memory is a way of storing and retrieving information. ~ It is applicable to information that consists of function-argument (or item-term) pairs--information conventionally stored in unordered lists, ordered lists, or pigeonholes. The main advantages of trie memory over the other memoIw plans just mentioned are shorter access time, greater ease of addition or up-dating, greater convenience in handling arguments of diverse lengths, and the ability to take advantage of redundancies in the information stored. The main disadvantage is relative inefficiency in using storage space, but this inefficiency is not great when the store is large. In this paper several paradigms of trie memory are described and compared with other memory paradigms, their advantages and disadvantages are examined in detail, and applications are discussed. Many essential features of trie memory were mentioned by de la Briandais [1] in a paper presented to the Western Joint Computer Conference in 1959. 
The present development is essentially independent of his, having been described in memorandum form in January 1959 [2], and it is fuller in that it considers additional paradigms (finitedimensional trie memories) and includes experimental results bearing on the efficiency of utilization of storage space.", "title": "" }, { "docid": "7fc92ce3f51a0ad3e300474e23cf7401", "text": "Dependency parsers are critical components within many NLP systems. However, currently available dependency parsers each exhibit at least one of several weaknesses, including high running time, limited accuracy, vague dependency labels, and lack of nonprojectivity support. Furthermore, no commonly used parser provides additional shallow semantic interpretation, such as preposition sense disambiguation and noun compound interpretation. In this paper, we present a new dependency-tree conversion of the Penn Treebank along with its associated fine-grain dependency labels and a fast, accurate parser trained on it. We explain how a non-projective extension to shift-reduce parsing can be incorporated into non-directional easy-first parsing. The parser performs well when evaluated on the standard test section of the Penn Treebank, outperforming several popular open source dependency parsers; it is, to the best of our knowledge, the first dependency parser capable of parsing more than 75 sentences per second at over 93% accuracy.", "title": "" }, { "docid": "95fbf262f9e673bd646ad7e02c5cbd53", "text": "Department of Finance Stern School of Business and NBER, New York University, 44 W. 4th Street, New York, NY 10012; mkacperc@stern.nyu.edu; http://www.stern.nyu.edu/∼mkacperc. Department of Finance Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; svnieuwe@stern.nyu.edu; http://www.stern.nyu.edu/∼svnieuwe. Department of Economics Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; lveldkam@stern.nyu.edu; http://www.stern.nyu.edu/∼lveldkam. We thank John Campbell, Joseph Chen, Xavier Gabaix, Vincent Glode, Ralph Koijen, Jeremy Stein, Matthijs van Dijk, and seminar participants at NYU Stern (economics and finance), Harvard Business School, Chicago Booth, MIT Sloan, Yale SOM, Stanford University (economics and finance), University of California at Berkeley (economics and finance), UCLA economics, Duke economics, University of Toulouse, University of Vienna, Australian National University, University of Melbourne, University of New South Wales, University of Sydney, University of Technology Sydney, Erasmus University, University of Mannheim, University of Alberta, Concordia, Lugano, the Amsterdam Asset Pricing Retreat, the Society for Economic Dynamics meetings in Istanbul, CEPR Financial Markets conference in Gerzensee, UBC Summer Finance conference, and Econometric Society meetings in Atlanta for useful comments and suggestions. Finally, we thank the Q-group for their generous financial support.", "title": "" }, { "docid": "80541e2df85384fa15074d4178cfa4ae", "text": "For the first time, we demonstrate the possibility of realizing low-cost mm-Wave antennas using inkjet printing of silver nano-particles. It is widely spread that fabrication of mm-Wave antennas and microwave circuits using the typical (deposit/pattern/etch) scheme is a challenging and costly process, due to the strict limitations on permissible tolerances. 
Such fabrication technique becomes even more challenging when dealing with flexible substrate materials, such as liquid crystal polymers. On the other hand, inkjet printing of conductive inks managed to form an emerging fabrication technology that has gained lots of attention over the last few years. Such process allows the deposition of conductive particles directly at the desired location on a substrate of interest, without need for mask productions, alignments, or etching. This means the inkjet printing of conductive materials could present the future of environment-friendly low-cost rapid manufacturing of RF circuits and antennas.", "title": "" }, { "docid": "340cceb987594709e207de5bd14965e7", "text": "Objective: Neuromuscular injury prevention programs (IPP) can reduce injury rate by about 40% in youth sport. Multimodal IPP include, for instance, balance, strength, power, and agility exercises. Our systematic review and meta-analysis aimed to evaluate the effects of multimodal IPP on neuromuscular performance in youth sports. Methods: We conducted a systematic literature search including selected search terms related to youth sports, injury prevention, and neuromuscular performance. Inclusion criteria were: (i) the study was a (cluster-)randomized controlled trial (RCT), and (ii) investigated healthy participants, up to 20 years of age and involved in organized sport, (iii) an intervention arm performing a multimodal IPP was compared to a control arm following a common training regime, and (iv) neuromuscular performance parameters (e.g., balance, power, strength, sprint) were assessed. Furthermore, we evaluated IPP effects on sport-specific skills. Results: Fourteen RCTs (comprising 704 participants) were analyzed. Eight studies included only males, and five only females. Seventy-one percent of all studies investigated soccer players with basketball, field hockey, futsal, Gaelic football, and hurling being the remaining sports. The average age of the participants ranged from 10 years up to 19 years and the level of play from recreational to professional. Intervention durations ranged from 4 weeks to 4.5 months with a total of 12 to 57 training sessions. We observed a small overall effect in favor of IPP for balance/stability (Hedges' g = 0.37; 95%CI 0.17, 0.58), leg power (g = 0.22; 95%CI 0.07, 0.38), and isokinetic hamstring and quadriceps strength as well as hamstrings-to-quadriceps ratio (g = 0.38; 95%CI 0.21, 0.55). We found a large overall effect for sprint abilities (g = 0.80; 95%CI 0.50, 1.09) and sport-specific skills (g = 0.83; 95%CI 0.34, 1.32). Subgroup analyses revealed larger effects in high-level (g = 0.34-1.18) compared to low-level athletes (g = 0.22-0.75), in boys (g = 0.27-1.02) compared to girls (g = 0.09-0.38), in older (g = 0.32-1.16) compared to younger athletes (g = 0.18-0.51), and in studies with high (g = 0.35-1.16) compared to low (g = 0.12-0.38) overall number of training sessions. Conclusion: Multimodal IPP beneficially affect neuromuscular performance. These improvements may substantiate the preventative efficacy of IPP and may support the wide-spread implementation and dissemination of IPP. The study has been a priori registered in PROSPERO (CRD42016053407).", "title": "" }, { "docid": "d2ce4df3be70141a3ab55aa0750f19ca", "text": "Agile methods have become popular in recent years because the success rate of project development using Agile methods is better than structured design methods. 
Nevertheless, less than 50 percent of projects implemented using Agile methods are considered successful, and selecting the wrong Agile method is one of the reasons for project failure. Selecting the most appropriate Agile method is a challenging task because there are so many to choose from. In addition, potential adopters believe that migrating to an Agile method involves taking a drastic risk. Therefore, to assist project managers and other decision makers, this study aims to identify the key factors that should be considered when selecting an appropriate Agile method. A systematic literature review was performed to elicit these factors in an unbiased manner, and then content analysis was used to analyze the resultant data. It was found that the nature of the project, development team skills, project constraints, customer involvement and organizational culture are the key factors that should guide decision makers in the selection of an appropriate Agile method based on the value these factors have for different organizations and/or different projects. Keywords— Agile method selection; factors of selecting Agile methods; SLR", "title": "" }, { "docid": "debf183822616eabc57b95f5e6037d4f", "text": "A new algorithm is proposed which accelerates the mini-batch k-means algorithm of Sculley (2010) by using the distance bounding approach of Elkan (2003). We argue that, when incorporating distance bounds into a mini-batch algorithm, already used data should preferentially be reused. To this end we propose using nested mini-batches, whereby data in a mini-batch at iteration t is automatically reused at iteration t+ 1. Using nested mini-batches presents two difficulties. The first is that unbalanced use of data can bias estimates, which we resolve by ensuring that each data sample contributes exactly once to centroids. The second is in choosing mini-batch sizes, which we address by balancing premature fine-tuning of centroids with redundancy induced slow-down. Experiments show that the resulting nmbatch algorithm is very effective, often arriving within 1% of the empirical minimum 100× earlier than the standard mini-batch algorithm.", "title": "" }, { "docid": "3a29bbe76a53c8284123019eba7e0342", "text": "Although von Ammon' first used the term blepharphimosis in 1841, it was Vignes2 in 1889 who first associated blepharophimosis with ptosis and epicanthus inversus. In 1921, Dimitry3 reported a family in which there were 21 affected subjects in five generations. He described them as having ptosis alone and did not specify any other features, although photographs in the report show that they probably had the full syndrome. Dimitry's pedigree was updated by Owens et a/ in 1960. The syndrome appeared in both sexes and was transmitted as a Mendelian dominant. In 1935, Usher5 reviewed the reported cases. By then, 26 pedigrees had been published with a total of 175 affected persons with transmission mainly through affected males. There was no consanguinity in any pedigree. In three pedigrees, parents who obviously carried the gene were unaffected. Well over 150 families have now been reported and there is no doubt about the autosomal dominant pattern of inheritance. However, like Usher,5 several authors have noted that transmission is mainly through affected males and less commonly through affected females.4 6 Reports by Moraine et al7 and Townes and Muechler8 have described families where all affected females were either infertile with primary or secondary amenorrhoea or had menstrual irregularity. 
Zlotogora et a/9 described one family and analysed 38 families reported previously. They proposed the existence of two types: type I, the more common type, in which the syndrome is transmitted by males only and affected females are infertile, and type II, which is transmitted by both affected females and males. There is male to male transmission in both types and both are inherited as an autosomal dominant trait. They found complete penetrance in type I and slightly reduced penetrance in type II.", "title": "" }, { "docid": "64306a76b61bbc754e124da7f61a4fbe", "text": "For over 50 years, electron beams have been an important modality for providing an accurate dose of radiation to superficial cancers and disease and for limiting the dose to underlying normal tissues and structures. This review looks at many of the important contributions of physics and dosimetry to the development and utilization of electron beam therapy, including electron treatment machines, dose specification and calibration, dose measurement, electron transport calculations, treatment and treatment-planning tools, and clinical utilization, including special procedures. Also, future changes in the practice of electron therapy resulting from challenges to its utilization and from potential future technology are discussed.", "title": "" }, { "docid": "f82135fc9034ce8308d3d1da156f65e3", "text": "Digital processing of electroencephalography (EEG) signals has now been popularly used in a wide variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise due to the utilization of unnecessary channels, for the purpose of improving the performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and wavelet transform have been used for feature extraction and hence for channel selection in most of channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used for the evaluation of the selected subset of channels. In this paper, we survey the recent developments in the field of EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.", "title": "" }, { "docid": "65118dccb8d5d9be4e21c46e7dde315c", "text": "In this paper, we will present a novel framework of utilizing periocular region for age invariant face recognition. To obtain age invariant features, we first perform preprocessing schemes, such as pose correction, illumination and periocular region normalization. And then we apply robust Walsh-Hadamard transform encoded local binary patterns (WLBP) on preprocessed periocular region only. We find the WLBP feature on periocular region maintains consistency of the same individual across ages. 
Finally, we use unsupervised discriminant projection (UDP) to build subspaces on WLBP featured periocular images and gain 100% rank-1 identification rate and 98% verification rate at 0.1% false accept rate on the entire FG-NET database. Compared to published results, our proposed approach yields the best recognition and identification results.", "title": "" }, { "docid": "ffa5ae359807884c2218b92d2db2a584", "text": "We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources. To our knowledge, this is the first machine learning-based method specifically for classifying consumer health questions. We demonstrate how previous approaches to medical question classification are insufficient to achieve high accuracy on this task. Additionally, we describe, manually annotate, and automatically classify three important question elements that improve question classification over previous techniques. Our results and analysis illustrate the difficulty of the task and the future directions that are necessary to achieve high-performing consumer health question classification.", "title": "" }, { "docid": "0c33a3eeaffb9afb76851a97d28cbdcc", "text": "We consider the cell-free massive multiple-input multiple-output (MIMO) downlink, where a very large number of distributed multiple-antenna access points (APs) serve many single-antenna users in the same time-frequency resource. A simple (distributed) conjugate beamforming scheme is applied at each AP via the use of local channel state information (CSI). This CSI is acquired through time-division duplex operation and the reception of uplink training signals transmitted by the users. We derive a closed-form expression for the spectral efficiency taking into account the effects of channel estimation errors and power control. This closed-form result enables us to analyze the effects of backhaul power consumption, the number of APs, and the number of antennas per AP on the total energy efficiency, as well as, to design an optimal power allocation algorithm. The optimal power allocation algorithm aims at maximizing the total energy efficiency, subject to a per-user spectral efficiency constraint and a per-AP power constraint. Compared with the equal power control, our proposed power allocation scheme can double the total energy efficiency. Furthermore, we propose AP selections schemes, in which each user chooses a subset of APs, to reduce the power consumption caused by the backhaul links. With our proposed AP selection schemes, the total energy efficiency increases significantly, especially for large numbers of APs. Moreover, under a requirement of good quality-of-service for all users, cell-free massive MIMO outperforms the colocated counterpart in terms of energy efficiency.", "title": "" }, { "docid": "37b1f275438471b89a226877a1783a6b", "text": "This paper presents the implementation of a wearable wireless sensor network aimed at monitoring harmful gases in industrial environments. The proposed solution is based on a customized wearable sensor node using a low-power low-rate wireless personal area network (LR-WPAN) communications protocol, which as a first approach measures CO₂ concentration, and employs different low power strategies for appropriate energy handling which is essential to achieving long battery life. 
These wearable nodes are connected to a deployed static network, and a web-based application allows data storage, remote control and monitoring of the complete network. Therefore, a complete and versatile remote web application with a locally implemented decision-making system is accomplished, which allows early detection of hazardous situations for exposed workers.", "title": "" }, { "docid": "ef598ba4f9a4df1f42debc0eabd1ead8", "text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.", "title": "" }, { "docid": "34a7ae3283c4f3bcb3e9afff2383de72", "text": "Latent variable models have been a preferred choice in conversational modeling compared to sequence-to-sequence (seq2seq) models, which tend to generate generic and repetitive responses. Even so, training latent variable models remains difficult. In this paper, we propose the Latent Topic Conversational Model (LTCM), which augments seq2seq with a neural latent topic component to better guide response generation and make training easier. The neural topic component encodes information from the source sentence to build a global “topic” distribution over words, which is then consulted by the seq2seq model at each generation step. We study in detail how the latent representation is learnt in both the vanilla model and LTCM. Our extensive experiments contribute to better understanding and training of conditional latent models for languages. Our results show that by sampling from the learnt latent representations, LTCM can generate diverse and interesting responses. In a subjective human evaluation, the judges also confirm that LTCM is the overall preferred option.", "title": "" }, { "docid": "17e1c5d4c8ff360cae2ec7ff2e8e7b4b", "text": "The mutual information term I(c;G(z, c)) requires the posterior P(c|G(z, c)) and is thus hard to maximize directly. ST-GAN uses a technique called Variational Information Maximization (Barber and Agakov, 2003), defining an auxiliary distribution Q(c|x) to approximate P(c|x), as InfoGAN (Chen et al., 2016) does. The variational lower bound, LI(G,Q), of the local mutual information I(c;G(z, c)) is defined as:", "title": "" }, { "docid": "4d51e2a6f1ddfb15753117b0f22e0fad", "text": "We describe distributed algorithms for two widely-used topic models, namely the Latent Dirichlet Allocation (LDA) model and the Hierarchical Dirichlet Process (HDP) model. In our distributed algorithms the data is partitioned across separate processors and inference is done in a parallel, distributed fashion. We propose two distributed algorithms for LDA.
The first algorithm is a straightforward mapping of LDA to a distributed processor setting. In this algorithm processors concurrently perform Gibbs sampling over local data followed by a global update of topic counts. The algorithm is simple to implement and can be viewed as an approximation to Gibbs-sampled LDA. The second version is a model that uses a hierarchical Bayesian extension of LDA to directly account for distributed data. This model has a theoretical guarantee of convergence but is more complex to implement than the first algorithm. Our distributed algorithm for HDP takes the straightforward mapping approach, and merges newly-created topics either by matching or by topic-id. Using five real-world text corpora we show that distributed learning works well in practice. For both LDA and HDP, we show that the converged test-data log probability for distributed learning is indistinguishable from that obtained with single-processor learning. Our extensive experimental results include learning topic models for two multi-million document collections using a 1024-processor parallel computer.", "title": "" } ]
scidocsrr
9700d880ea946726f8aa8a0afe0f63d8
Wearable Monitoring Unit for Swimming Performance Analysis
[ { "docid": "8717a6e3c20164981131997efbe08a0d", "text": "The recent maturity of body sensor networks has enabled a wide range of applications in sports, well-being and healthcare. In this paper, we hypothesise that a single unobtrusive head-worn inertial sensor can be used to infer certain biomotion details of specific swimming techniques. The sensor, weighing only seven grams is mounted on the swimmer's goggles, limiting the disturbance to a minimum. Features extracted from the recorded acceleration such as the pitch and roll angles allow to recognise the type of stroke, as well as basic biomotion indices. The system proposed represents a non-intrusive, practical deployment of wearable sensors for swimming performance monitoring.", "title": "" }, { "docid": "4122375a509bf06cc7e8b89cb30357ff", "text": "Textile-based sensors offer an unobtrusive method of continually monitoring physiological parameters during daily activities. Chemical analysis of body fluids, noninvasively, is a novel and exciting area of personalized wearable healthcare systems. BIOTEX was an EU-funded project that aimed to develop textile sensors to measure physiological parameters and the chemical composition of body fluids, with a particular interest in sweat. A wearable sensing system has been developed that integrates a textile-based fluid handling system for sample collection and transport with a number of sensors including sodium, conductivity, and pH sensors. Sensors for sweat rate, ECG, respiration, and blood oxygenation were also developed. For the first time, it has been possible to monitor a number of physiological parameters together with sweat composition in real time. This has been carried out via a network of wearable sensors distributed around the body of a subject user. This has huge implications for the field of sports and human performance and opens a whole new field of research in the clinical setting.", "title": "" } ]
[ { "docid": "0886c323b86b4fac8de6217583841318", "text": "Data Mining is a technique used in various domains to give meaning to the available data. Classification is a data mining (machine learning) technique used to predict group membership for data instances. In this paper, we present the basic classification techniques. Several major kinds of classification methods are discussed, including decision tree, Bayesian networks, k-nearest neighbour classifier, Neural Network, and Support vector machine. The goal of this paper is to provide a review of different classification techniques in data mining. Keywords— Data mining, classification, Support vector machine (SVM), K-nearest neighbour (KNN), Decision Tree.", "title": "" }, { "docid": "c112b88b7a5762050a54a15d066336b0", "text": "Before 2005, data broker ChoicePoint suffered fraudulent access to its databases that exposed thousands of customers' personal information. We examine ChoicePoint's data breach, explore what went wrong from the perspective of consumers, executives, policy, and IT systems, and offer recommendations for the future.", "title": "" }, { "docid": "2923ea4e17567b06b9d8e0e9f1650e55", "text": "A new compact two-segment dielectric resonator antenna (TSDR) for ultrawideband (UWB) applications is presented and studied. The design consists of a thin monopole printed antenna loaded with two dielectric resonators with different dielectric constants. By applying a combination of a U-shaped feedline and a modified TSDR, proper radiation characteristics are achieved. The proposed antenna provides an ultrawide impedance bandwidth, high radiation efficiency, and a compact overall size of 18 × 36 × 11 mm. From the measurement results, it is found that the realized dielectric resonator antenna with good radiation characteristics provides an ultrawide bandwidth of about 110%, covering a range from 3.14 to 10.9 GHz, which covers UWB applications.", "title": "" }, { "docid": "24174e59a5550fbf733c1a93f1519cf7", "text": "Using social practice theory, this article reveals the process of collective value creation within brand communities. Moving beyond a single case study, the authors examine previously published research in conjunction with data collected in nine brand communities comprising a variety of product categories, and they identify a common set of value-creating practices. Practices have an “anatomy” consisting of (1) general procedural understandings and rules (explicit, discursive knowledge); (2) skills, abilities, and culturally appropriate consumption projects (tacit, embedded knowledge or how-to); and (3) emotional commitments expressed through actions and representations. The authors find that there are 12 common practices across brand communities, organized by four thematic aggregates, through which consumers realize value beyond that which the firm creates or anticipates. They also find that practices have a physiology, interact with one another, function like apprenticeships, endow participants with cultural capital, produce a repertoire for insider sharing, generate consumption opportunities, evince brand community vitality, and create value. Theoretical and managerial implications are offered with specific suggestions for building and nurturing brand community and enhancing collaborative value creation between and among consumers and firms.", "title": "" }, { "docid": "114affaf4e25819aafa1c11da26b931f", "text": "We propose a coherent mathematical model for human fingerprint images.
Fingerprint structure is represented simply as a hologram - namely a phase modulated fringe pattern. The holographic form unifies analysis, classification, matching, compression, and synthesis of fingerprints in a self-consistent formalism. Hologram phase is at the heart of the method; a phase that uniquely decomposes into two parts via the Helmholtz decomposition theorem. Phase also circumvents the infinite frequency singularities that always occur at minutiae. Reliable analysis is possible using a recently discovered two-dimensional demodulator. The parsimony of this model is demonstrated by the reconstruction of a fingerprint image with an extreme compression factor of 239.", "title": "" }, { "docid": "44a8b574a892bff722618d256aa4ba6c", "text": "In this article, we investigate the cross-media retrieval between images and text, that is, using image to search text (I2T) and using text to search images (T2I). Existing cross-media retrieval methods usually learn one couple of projections, by which the original features of images and text can be projected into a common latent space to measure the content similarity. However, using the same projections for the two different retrieval tasks (I2T and T2I) may lead to a tradeoff between their respective performances, rather than their best performances. Different from previous works, we propose a modality-dependent cross-media retrieval (MDCR) model, where two couples of projections are learned for different cross-media retrieval tasks instead of one couple of projections. Specifically, by jointly optimizing the correlation between images and text and the linear regression from one modal space (image or text) to the semantic space, two couples of mappings are learned to project images and text from their original feature spaces into two common latent subspaces (one for I2T and the other for T2I). Extensive experiments show the superiority of the proposed MDCR compared with other methods. In particular, based on the 4,096-dimensional convolutional neural network (CNN) visual feature and 100-dimensional Latent Dirichlet Allocation (LDA) textual feature, the mAP of the proposed method achieves the mAP score of 41.5%, which is a new state-of-the-art performance on the Wikipedia dataset.", "title": "" }, { "docid": "8ea0ac6401d648e359fc06efa59658e6", "text": "Different neural networks have exhibited excellent performance on various speech processing tasks, and they usually have specific advantages and disadvantages. We propose to use a recently developed deep learning model, recurrent convolutional neural network (RCNN), for speech processing, which inherits some merits of recurrent neural network (RNN) and convolutional neural network (CNN). The core module can be viewed as a convolutional layer embedded with an RNN, which enables the model to capture both temporal and frequency dependance in the spectrogram of the speech in an efficient way. The model is tested on speech corpus TIMIT for phoneme recognition and IEMOCAP for emotion recognition. Experimental results show that the model is competitive with previous methods in terms of accuracy and efficiency.", "title": "" }, { "docid": "474986186c068f8872f763288b0cabd7", "text": "Mobile ad hoc network researchers face the challenge of achieving full functionality with good performance while linking the new technology to the rest of the Internet. A strict layered design is not flexible enough to cope with the dynamics of manet environments, however, and will prevent performance optimizations. 
The MobileMan cross-layer architecture offers an alternative to the pure layered approach that promotes stricter local interaction among protocols in a manet node.", "title": "" }, { "docid": "c05f2a6df3d58c5a18e0087556c8067e", "text": "Child maltreatment is a major social problem. This paper focuses on measuring the relationship between child maltreatment and crime using data from the National Longitudinal Study of Adolescent Health (Add Health). We focus on crime because it is one of the most costly potential outcomes of maltreatment. Our work addresses two main limitations of the existing literature on child maltreatment. First, we use a large national sample, and investigate different types of maltreatment in a unified framework. Second, we pay careful attention to controlling for possible confounders using a variety of statistical methods that make differing assumptions. The results suggest that maltreatment greatly increases the probability of engaging in crime and that the probability increases with the experience of multiple forms of maltreatment.", "title": "" }, { "docid": "c9df206d8c0bc671f3109c1c7b12b149", "text": "The Internet of Things (IoT) is a unified network of physical objects that can change the parameters of the environment or their own, gather information, and transmit it to other devices. It is emerging as the third wave in the development of the internet. This technology will give immediate access to information about the physical world and the objects in it, leading to innovative services and increases in efficiency and productivity. The IoT is enabled by the latest developments in smart sensors, communication technologies, and Internet protocols. This article contains a description of Internet of Things (IoT) networks. Much attention is given to the prospects for the future use of IoT and its development. Some problems in the development of IoT are also noted. The article also gives valuable information on building (constructing) IoT systems based on PLC technology.", "title": "" }, { "docid": "638336dba1dd589b0f708a9426483827", "text": "Girard's linear logic can be used to model programming languages in which each bound variable name has exactly one \"occurrence\"---i.e., no variable can have implicit \"fan-out\"; multiple uses require explicit duplication. Among other nice properties, \"linear\" languages need no garbage collector, yet have no dangling reference problems. We show a natural equivalence between a \"linear\" programming language and a stack machine in which the top items can undergo arbitrary permutations. Such permutation stack machines can be considered combinator abstractions of Moore's Forth programming language.", "title": "" }, { "docid": "28552dfe20642145afa9f9fa00218e8e", "text": "Augmented Reality can be of immense benefit to the construction industry. The oft-cited benefits of AR in the construction industry include real-time visualization of projects, project monitoring by overlaying virtual models on actual built structures, and onsite information retrieval. But this technology is restricted by the high cost and limited portability of the devices. Further, problems with real-time and accurate tracking in a construction environment hinder its broader application. To enable utilization of augmented reality on a construction site, a low-cost augmented reality framework based on the Google Cardboard visor is proposed. The current applications available for Google Cardboard have several limitations in delivering an AR experience relevant to construction requirements.
To overcome these limitations Unity game engine, with the help of Vuforia & Cardboard SDK, is used to develop an application environment which can be used for location and orientation specific visualization and planning of work at construction workface. The real world image is captured through the smart-phone camera input and blended with the stereo input of the 3D models to enable a full immersion experience. The application is currently limited to marker based tracking where the 3D models are triggered onto the user’s view upon scanning an image which is registered with a corresponding 3D model preloaded into the application. A gaze input user interface is proposed which enables the user to interact with the augmented models. Finally usage of AR app while traversing the construction site is illustrated.", "title": "" }, { "docid": "2271347e3b04eb5a73466aecbac4e849", "text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. [2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method", "title": "" }, { "docid": "c28dc261ddc770a6655eb1dbc528dd3b", "text": "Software applications are no longer stand-alone systems. They are increasingly the result of integrating heterogeneous collections of components, both executable and data, possibly dispersed over a computer network. Different components can be provided by different producers and they can be part of different systems at the same time. Moreover, components can change rapidly and independently, making it difficult to manage the whole system in a consistent way. Under these circumstances, a crucial step of the software life cycle is deployment—that is, the activities related to the release, installation, activation, deactivation, update, and removal of components, as well as whole systems. This paper presents a framework for characterizing technologies that are intended to support software deployment. The framework highlights four primary factors concerning the technologies: process coverage; process changeability; interprocess coordination; and site, product, and deployment policy abstraction. A variety of existing technologies are surveyed and assessed against the framework. Finally, we discuss promising research directions in software deployment. This work was supported in part by the Air Force Material Command, Rome Laboratory, and the Defense Advanced Research Projects Agency under Contract Number F30602-94-C-0253. The content of the information does not necessarily reflect the position or the policy of the U.S. Government and no official endorsement should be inferred.", "title": "" }, { "docid": "ff002c483d22b4d961bbd2f1a18231fd", "text": "Dogs can be grouped into two distinct types of breed based on the predisposition to chondrodystrophy, namely, non-chondrodystrophic (NCD) and chondrodystrophic (CD). In addition to a different process of endochondral ossification, NCD and CD breeds have different characteristics of intravertebral disc (IVD) degeneration and IVD degenerative diseases. The anatomy, physiology, histopathology, and biochemical and biomechanical characteristics of the healthy and degenerated IVD are discussed in the first part of this two-part review. 
This second part describes the similarities and differences in the histopathological and biochemical characteristics of IVD degeneration in CD and NCD canine breeds and discusses relevant aetiological factors of IVD degeneration.", "title": "" }, { "docid": "58de521ab563333c2051b590592501a8", "text": "Prognostics and systems health management (PHM) is an enabling discipline that uses sensors to assess the health of systems, diagnoses anomalous behavior, and predicts the remaining useful performance over the life of the asset. The advent of the Internet of Things (IoT) enables PHM to be applied to all types of assets across all sectors, thereby creating a paradigm shift that is opening up significant new business opportunities. This paper introduces the concepts of PHM and discusses the opportunities provided by the IoT. Developments are illustrated with examples of innovations from manufacturing, consumer products, and infrastructure. From this review, a number of challenges that result from the rapid adoption of IoT-based PHM are identified. These include appropriate analytics, security, IoT platforms, sensor energy harvesting, IoT business models, and licensing approaches.", "title": "" }, { "docid": "011a9ac960aecc4a91968198ac6ded97", "text": "INTRODUCTION\nPsychological empowerment is very important and has a remarkable effect on different organizational variables such as job satisfaction, organizational commitment, productivity, etc. The aim of this study was therefore to investigate the relationship between psychological empowerment and the productivity of librarians at Isfahan Medical University.\n\n\nMETHODS\nThis was a correlational study. Data were collected through two questionnaires: the psychological empowerment questionnaire and the manpower productivity questionnaire of Gold Smith Hersey, whose content validity was confirmed by experts and whose reliability was obtained using Cronbach's Alpha coefficients of 0.89 and 0.9, respectively. Due to the limited statistical population, sampling was not used and the study was conducted as a census, so 76 librarians were evaluated. Data were reported using both descriptive and inferential statistics (Pearson and Spearman correlation coefficients, T-test, ANOVA) and analyzed using the SPSS19 software.\n\n\nFINDINGS\nIn our study, trust between partners and efficacy had the highest correlations with productivity. There was also a direct relationship between psychological empowerment and labor productivity (r = 0.204). In other words, as the mean score of psychological empowerment rises, the mean score of efficiency increases too.\n\n\nCONCLUSIONS\nThe results showed that if programs for developing librarians' psychological empowerment are expanded in order to improve their productivity, librarians will carry out their duties with a greater sense of purpose. Also, by using the capabilities of librarians, creativity will develop and organizational productivity will increase.", "title": "" }, { "docid": "a5090b67307b2efa1f8ae7d6a212a6ff", "text": "Providing highly flexible connectivity is a major architectural challenge for hardware implementation of reconfigurable neural networks. We perform an analytical evaluation and comparison of different configurable interconnect architectures (mesh NoC, tree, shared bus and point-to-point) emulating variants of two neural network topologies (having full and random configurable connectivity).
We derive analytical expressions and asymptotic limits for performance (in terms of bandwidth) and cost (in terms of area and power) of the interconnect architectures considering three communication methods (unicast, multicast and broadcast). It is shown that multicast mesh NoC provides the highest performance/cost ratio and consequently it is the most suitable interconnect architecture for configurable neural network implementation. Routing table size requirements and their impact on scalability were analyzed. Modular hierarchical architecture based on multicast mesh NoC is proposed to allow large scale neural networks emulation. Simulation results successfully validate the analytical models and the asymptotic behavior of the network as a function of its size. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "78966bb154649f9f4abb87bd5f29b230", "text": "The objective of a news veracity detection system is to identify various types of potentially misleading or false information, typically in a digital platform. A critical challenge in this scenario is that there are large volumes of data available online. However, obtaining samples with annotations (i.e. ground-truth labels) is difficult and a known limiting factor for many data analytic tasks including the current problem of news veracity detection. In this paper, we propose a human-machine collaborative learning system to evaluate the veracity of a news content, with a limited amount of annotated data samples. In a semi-supervised scenario, an initial classifier is learnt on a small, limited amount of the annotated data followed by an interactive approach to gradually update the model by shortlisting only relevant samples from the large pool of unlabeled data that are most likely to improve the classifier performance. Our prioritized active learning solution achieves faster convergence in terms of the classification performance, while requiring about 1–2 orders of magnitude fewer annotated samples compared to fully supervised solutions to attain a reasonably acceptable accuracy of nearly 80%. Unlike traditional deep learning architecture, the proposed active learning based deep model designed with a smaller number of more localized filters per layer can efficiently learn from small relevant sample batches that can effectively improve performance in the weakly-supervised learning environment and thus is more suitable for several practical applications. An effective dynamic domain adaptive feature weighting scheme can adjust the relative importance of feature dimensions iteratively. Insightful initial feedback gathered from two independent learning modules (a NLP shallow feature based classifier and a deep classifier), modeled to capture complementary information about data characteristics are finally fused together to achieve an impressive 25% average gain in the detection performance.", "title": "" }, { "docid": "2faf7fedadfd8b24c4740f7100cf5fec", "text": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily onword similaritytasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of “semantic similarity” is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. 
Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.", "title": "" } ]
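The passage above refers to the standard intrinsic evaluation protocol it critiques: compute a model similarity for every human-rated word pair and rank-correlate the two lists. A minimal sketch of that protocol follows; the toy vocabulary, the random placeholder vectors and the tiny rating set are illustrative assumptions, not data from the paper (real evaluations use benchmark rating sets of the WordSim type).

    # Sketch of word-similarity evaluation: Spearman correlation between
    # cosine similarities of word vectors and human similarity judgments.
    import numpy as np
    from scipy.stats import spearmanr

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    def word_similarity_score(embeddings, rated_pairs):
        """rated_pairs: list of (word1, word2, human_rating)."""
        model_sims = [cosine(embeddings[w1], embeddings[w2]) for w1, w2, _ in rated_pairs]
        human_sims = [r for _, _, r in rated_pairs]
        rho, _ = spearmanr(model_sims, human_sims)   # rank correlation reported as the score
        return rho

    rng = np.random.default_rng(0)
    vocab = ["cat", "dog", "car", "truck", "banana"]
    embeddings = {w: rng.normal(size=50) for w in vocab}      # placeholder vectors, not a trained model
    rated_pairs = [("cat", "dog", 8.5), ("car", "truck", 8.0), ("cat", "banana", 1.5)]
    print(word_similarity_score(embeddings, rated_pairs))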
scidocsrr
8c6ed91a636dc9882769d0faa93bf9b8
The Affordances of Business Analytics for Strategic Decision-Making and Their Impact on Organisational Performance
[ { "docid": "ba4121003eb56d3ab6aebe128c219ab7", "text": "Mediation is said to occur when a causal effect of some variable X on an outcome Y is explained by some intervening variable M. The authors recommend that with small to moderate samples, bootstrap methods (B. Efron & R. Tibshirani, 1993) be used to assess mediation. Bootstrap tests are powerful because they detect that the sampling distribution of the mediated effect is skewed away from 0. They argue that R. M. Baron and D. A. Kenny's (1986) recommendation of first testing the X --> Y association for statistical significance should not be a requirement when there is a priori belief that the effect size is small or suppression is a possibility. Empirical examples and computer setups for bootstrap analyses are provided.", "title": "" } ]
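A minimal sketch of the percentile-bootstrap test for the indirect (mediated) effect a*b that the passage above recommends. The use of plain OLS for the two regressions, the 95% percentile interval, the bootstrap sample counts and the synthetic X, M, Y data are assumptions made for illustration; they are not taken from the paper.

    # Percentile bootstrap for the indirect effect a*b (M ~ X gives a, Y ~ X + M gives b).
    import numpy as np

    def indirect_effect(x, m, y):
        a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]      # slope of M ~ X
        b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]   # partial slope of M in Y ~ X + M
        return a * b

    def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        n = len(x)
        boots = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)               # resample cases with replacement
            boots.append(indirect_effect(x[idx], m[idx], y[idx]))
        lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lo, hi                                 # mediation is claimed when 0 lies outside this interval

    rng = np.random.default_rng(1)
    x = rng.normal(size=80)                           # deliberately small sample, as in the passage's setting
    m = 0.4 * x + rng.normal(size=80)
    y = 0.5 * m + rng.normal(size=80)
    print(indirect_effect(x, m, y), bootstrap_ci(x, m, y, n_boot=2000))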
[ { "docid": "879b58634bd71c8eee6c37350c196dc3", "text": "This paper presents a novel high-voltage gain boost converter topology based on the three-state commutation cell for battery charging using PV panels and a reduced number of conversion stages. The presented converter operates in zero-voltage switching (ZVS) mode for all switches. By using the new concept of single-stage approaches, the converter can generate a dc bus with a battery bank or a photovoltaic panel array, allowing the simultaneous charge of the batteries according to the radiation level. The operation principle, design specifications, and experimental results from a 500-W prototype are presented in order to validate the proposed structure.", "title": "" }, { "docid": "2ae773f548c1727a53a7eb43550d8063", "text": "Today's Internet hosts are threatened by large-scale distributed denial-of-service (DDoS) attacks. The path identification (Pi) DDoS defense scheme has recently been proposed as a deterministic packet marking scheme that allows a DDoS victim to filter out attack packets on a per packet basis with high accuracy after only a few attack packets are received (Yaar , 2003). In this paper, we propose the StackPi marking, a new packet marking scheme based on Pi, and new filtering mechanisms. The StackPi marking scheme consists of two new marking methods that substantially improve Pi's incremental deployment performance: Stack-based marking and write-ahead marking. Our scheme almost completely eliminates the effect of a few legacy routers on a path, and performs 2-4 times better than the original Pi scheme in a sparse deployment of Pi-enabled routers. For the filtering mechanism, we derive an optimal threshold strategy for filtering with the Pi marking. We also develop a new filter, the PiIP filter, which can be used to detect Internet protocol (IP) spoofing attacks with just a single attack packet. Finally, we discuss in detail StackPi's compatibility with IP fragmentation, applicability in an IPv6 environment, and several other important issues relating to potential deployment of StackPi", "title": "" }, { "docid": "e71402bed9c526d9152885ef86c30bb5", "text": "Narratives structure our understanding of the world and of ourselves. They exploit the shared cognitive structures of human motivations, goals, actions, events, and outcomes. We report on a computational model that is motivated by results in neural computation and captures fine-grained, context sensitive information about human goals, processes, actions, policies, and outcomes. We describe the use of the model in the context of a pilot system that is able to interpret simple stories and narrative fragments in the domain of international politics and economics. 
We identify problems with the pilot system and outline extensions required to incorporate several crucial dimensions of narrative structure.", "title": "" }, { "docid": "9a6e7b49ddfa98520af1bb33bfb5fafa", "text": "Spell Description Schl Comp Time Range Target, Effect, Area Duration Save SR PHB £ Acid Fog Fog deals 2d6/rnd acid damage Conj V,S,M/DF 1 a Medium 20-ft radius 1 rnd/lvl-196 £ Acid Splash Acid Missile 1d3 damage Conj V,S 1 a Close Acid missile Instantaneous-196 £ Aid +1 att,+1 fear saves,1d8 +1/lvl hps Ench V,S,DF 1 a Touch One living creature 1 min/lvl-Yes 196 £ Air Walk Target treads on air as if solid Trans V,S,DF 1 a Touch One creature 10 min/lvl-Yes 196 £ Alarm Wards an area for 2 hr/lvl Abjur V,S,F/DF 1 a Close 20-ft radius 2 hr/lvl (D)-197 £ Align Weapon Adds alignment to weapon Trans V,S,DF 1 a Touch Weapon 1 min/lvl Will negs Yes 197 £ Alter Self Changes appearance Trans V,S 1 a Self Caster, +10 disguise 10 min/lvl (D)-197 £ Analyze Dweomer Reveals magical aspects of target Div V,S,F 1 a Close Item or creature/lvl 1 rnd/lvl (D) Will negs-197 £ Animal Growth Animal/2 lvls increases size category Trans V,S 1 a Medium 1 animal/2 lvls 1 min/lvl Fort negs Yes 198 £ Animal Messenger Send a tiny animal to specific place Ench V,S,M 1 a Close One tiny animal 1 day/lvl-Yes 198 £ Animal Shapes 1 ally/lvl polymorphs into animal Trans V,S,DF 1 a Close One creature/lvl 1 hr/lvl (D)-Yes 198 £ Animal Trance Fascinates 2d6 HD of animals Ench V,S 1 a Close Animals, Int 1 or 2 Conc Will negs Yes 198 £ Animate Dead Creates skeletons and zombies Necro V,S,M 1 a Touch Max 2HD/lvl Instantaneous-198 £ Animate Objects Items attack your foes Trans V,S 1 a Medium One small item/lvl 1 rnd/lvl-199 £ Animate Plants Animated plant Trans V 1 a Close 1 plant/3lvls 1 rnd/lvl-199 £ Animate Rope Rope moves at your command Trans V,S 1 a Medium 1 ropelike item 1 rnd/lvl-199 £ Antilife Shell 10-ft field excludes living creatures Abjur V,S,DF Round 10-ft 10-ft radius 10 min/lvl (D)-Yes 199 £ Antimagic Field Negates magic within 10-ft Abjur V,S,M/DF 1 a 10-ft 10-ft radius 10 min/lvl (D)-Sp 200 £ Antipathy Item or location repels creatures Ench V,S,M/DF 1 hr Close Location or item 2 hr/lvl (D) Will part Yes 200 £ Antiplant Shell Barrier protects against plants Abjur V,S,DF 1 a 10-ft 10-ft radius 10 min/lvl (D)-Yes 200 £ Arcane Eye Floating eye, moves 30ft/rnd Div V,S,M 10 min Unlimited Magical sensor 1 min/lvl (D)-200 …", "title": "" }, { "docid": "de6e139d0b5dc295769b5ddb9abcc4c6", "text": "1 Abd El-Moniem M. Bayoumi is a graduate TA at the Department of Computer Engineering, Cairo University. He received his BS degree in from Cairo University in 2009. He is currently an RA, working for a research project on developing an innovative revenue management system for the hotel business. He was awarded the IEEE CIS Egypt Chapter’s special award for his graduation project in 2009. Bayoumi is interested to research in machine learning and business analytics; and he is currently working on his MS on stock market prediction.", "title": "" }, { "docid": "1b60ded506c85edd798fe0759cce57fa", "text": "The studies of plant trait/disease refer to the studies of visually observable patterns of a particular plant. Nowadays crops face many traits/diseases. Damage of the insect is one of the major trait/disease. Insecticides are not always proved efficient because insecticides may be toxic to some kind of birds. It also damages natural animal food chains. 
A common practice for plant scientists is to estimate the damage of plant (leaf, stem) because of disease by an eye on a scale based on percentage of affected area. It results in subjectivity and low throughput. This paper provides a advances in various methods used to study plant diseases/traits using image processing. The methods studied are for increasing throughput & reducing subjectiveness arising from human experts in detecting the plant diseases.", "title": "" }, { "docid": "15cfa9005e68973cbca60f076180b535", "text": "Much of the literature on fair classifiers considers the case of a single classifier used once, in isolation. We initiate the study of composition of fair classifiers. In particular, we address the pitfalls of näıve composition and give general constructions for fair composition. Focusing on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], we also extend our results to a large class of group fairness definitions popular in the recent literature. We exhibit several cases in which group fairness definitions give misleading signals under composition and conclude that additional context is needed to evaluate both group and individual fairness under composition.", "title": "" }, { "docid": "9a73e9bc7c0dc343ad9dbe1f3dfe650c", "text": "The word robust has been used in many contexts in signal processing. Our treatment concerns statistical robustness, which deals with deviations from the distributional assumptions. Many problems encountered in engineering practice rely on the Gaussian distribution of the data, which in many situations is well justified. This enables a simple derivation of optimal estimators. Nominal optimality, however, is useless if the estimator was derived under distributional assumptions on the noise and the signal that do not hold in practice. Even slight deviations from the assumed distribution may cause the estimator's performance to drastically degrade or to completely break down. The signal processing practitioner should, therefore, ask whether the performance of the derived estimator is acceptable in situations where the distributional assumptions do not hold. Isn't it robustness that is of a major concern for engineering practice? Many areas of engineering today show that the distribution of the measurements is far from Gaussian as it contains outliers, which cause the distribution to be heavy tailed. Under such scenarios, we address single and multichannel estimation problems as well as linear univariate regression for independently and identically distributed (i.i.d.) data. A rather extensive treatment of the important and challenging case of dependent data for the signal processing practitioner is also included. For these problems, a comparative analysis of the most important robust methods is carried out by evaluating their performance theoretically, using simulations as well as real-world data.", "title": "" }, { "docid": "290b56471b64e150e40211f7a51c1237", "text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. 
The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.", "title": "" }, { "docid": "3405c4808237f8d348db27776d6e9b61", "text": "Pheochromocytomas are catecholamine-releasing tumors that can be found in an extraadrenal location in 10% of the cases. Almost half of all pheochromocytomas are now discovered incidentally during cross-sectional imaging for unrelated causes. We present a case of a paragaglioma of the organ of Zuckerkandl that was discovered incidentally during a magnetic resonance angiogram performed for intermittent claudication. Subsequent investigation with computed tompgraphy and I-123 metaiodobenzylguanine scintigraphy as well as an overview of the literature are also presented.", "title": "" }, { "docid": "fadbfcc98ad512dd788f6309d0a932af", "text": "Thanks to the convergence of pervasive mobile communications and fast-growing online social networking, mobile social networking is penetrating into our everyday life. Aiming to develop a systematic understanding of mobile social networks, in this paper we exploit social ties in human social networks to enhance cooperative device-to-device (D2D) communications. Specifically, as handheld devices are carried by human beings, we leverage two key social phenomena, namely social trust and social reciprocity, to promote efficient cooperation among devices. With this insight, we develop a coalitional game-theoretic framework to devise social-tie-based cooperation strategies for D2D communications. We also develop a network-assisted relay selection mechanism to implement the coalitional game solution, and show that the mechanism is immune to group deviations, individually rational, truthful, and computationally efficient. We evaluate the performance of the mechanism by using real social data traces. Simulation results corroborate that the proposed mechanism can achieve significant performance gain over the case without D2D cooperation.", "title": "" }, { "docid": "4f3b91bfaa2304e78ad5cd305fb5d377", "text": "The construction of a three-dimensional object model from a set of images taken from different viewpoints is an important problem in computer vision. One of the simplest ways to do this is to use the silhouettes of the object (the binary classification of images into object and background) to construct a bounding volume for the object. To efficiently represent this volume, we use an octree, which represents the object as a tree of recursively subdivided cubes. We develop a new algorithm for computing the octree bounding volume from multiple silhouettes and apply it to an object rotating on a turntable in front of a stationary camera. The algorithm performs a limited amount of processing for each viewpoint and incrementally builds the volumetric model. 
The resulting algorithm requires less total computation than previous algorithms, runs in close to real-time, and builds a model whose resolution improves over time. 1993 Academic Press, Inc.", "title": "" }, { "docid": "cc3fbbff0a4d407df0736ef9d1be5dd0", "text": "The purpose of this study is to examine the effect of brand image benefits on satisfaction and loyalty intention in the context of color cosmetic product. Five brand image benefits consisting of functional, social, symbolic, experiential and appearance enhances were investigated. A survey carried out on 97 females showed that functional and appearance enhances significantly affect loyalty intention. Four of brand image benefits: functional, social, experiential, and appearance enhances are positively related to overall satisfaction. The results also indicated that overall satisfaction does influence customers' loyalty. The results imply that marketers should focus on brand image benefits in their effort to achieve customer loyalty.", "title": "" }, { "docid": "f07c06a198547aa576b9a6350493e6d4", "text": "In this paper we examine the diffusion of competing rumors in social networks. Two players select a disjoint subset of nodes as initiators of the rumor propagation, seeking to maximize the number of persuaded nodes. We use concepts of game theory and location theory and model the selection of starting nodes for the rumors as a strategic game. We show that computing the optimal strategy for both the first and the second player is NP-complete, even in a most restricted model. Moreover we prove that determining an approximate solution for the first player is NP-complete as well. We analyze several heuristics and show that—counter-intuitively—being the first to decide is not always an advantage, namely there exist networks where the second player can convince more nodes than the first, regardless of the first player’s decision.", "title": "" }, { "docid": "186145f38fd2b0e6ff41bb50cdeace13", "text": "Automatic sarcasm detection is the task of predicting sarcasm in text. This is a crucial step to sentiment analysis, considering prevalence and challenges of sarcasm in sentiment-bearing text. Beginning with an approach that used speech-based features, automatic sarcasm detection has witnessed great interest from the sentiment analysis community. This article is a compilation of past work in automatic sarcasm detection. We observe three milestones in the research so far: semi-supervised pattern extraction to identify implicit sentiment, use of hashtag-based supervision, and incorporation of context beyond target text. In this article, we describe datasets, approaches, trends, and issues in sarcasm detection. We also discuss representative performance values, describe shared tasks, and provide pointers to future work, as given in prior works. 
In terms of resources to understand the state-of-the-art, the survey presents several useful illustrations—most prominently, a table that summarizes past papers along different dimensions such as the types of features, annotation techniques, and datasets used.", "title": "" }, { "docid": "ee141b7fd5c372fb65d355fe75ad47af", "text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.", "title": "" }, { "docid": "ad56422f7dc5c9ebf8451e17565a79e8", "text": "Morphological changes of retinal vessels such as arteriovenous (AV) nicking are signs of many systemic diseases. In this paper, an automatic method for AV-nicking detection is proposed. The proposed method includes crossover point detection and AV-nicking identification. Vessel segmentation, vessel thinning, and feature point recognition are performed to detect crossover point. A method of vessel diameter measurement is proposed with processing of removing voids, hidden vessels and micro-vessels in segmentation. The AV-nicking is detected based on the features of vessel diameter measurement. The proposed algorithms have been tested using clinical images. The results show that nicking points in retinal images can be detected successfully in most cases.", "title": "" }, { "docid": "ac657141ed547f870ad35d8c8b2ba8f5", "text": "Induced by “big data,” “topic modeling” has become an attractive alternative to mapping cowords in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument (“The Leiden Manifesto”) and then upscale to a sample of moderate size (n = 687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.", "title": "" }, { "docid": "a0547eae9a2186d4c6f1b8307317f061", "text": "Leadership scholars have called for additional research on leadership skill requirements and how those requirements vary by organizational level. In this study, leadership skill requirements are conceptualized as being layered (strata) and segmented (plex), and are thus described using a strataplex. 
Based on previous conceptualizations, this study proposes a model made up of four categories of leadership skill requirements: Cognitive skills, Interpersonal skills, Business skills, and Strategic skills. The model is then tested in a sample of approximately 1000 junior, midlevel, and senior managers, comprising a full career track in the organization. Findings support the “plex” element of the model through the emergence of four leadership skill requirement categories. Findings also support the “strata” portion of the model in that different categories of leadership skill requirements emerge at different organizational levels, and that jobs at higher levels of the organization require higher levels of all leadership skills. In addition, although certain Cognitive skill requirements are important across organizational levels, certain Strategic skill requirements only fully emerge at the highest levels in the organization. Thus a strataplex proved to be a valuable tool for conceptualizing leadership skill requirements across organizational levels. © 2007 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
7e2af48fc319eecb15d2803c614fd278
Identifying confounders using additive noise models
[ { "docid": "a8dcddea10d4c5468618d233a4b2081e", "text": "Dimensionality reduction is an important task in machine learning, for it facilitates classification, compression, and visualization of high-dimensional data by mitigating undesired properties of high-dimensional spaces. Over the last decade, a large number of new (nonlinear) techniques for dimensionality reduction have been proposed. Most of these techniques are based on the intuition that data lies on or near a complex low-dimensional manifold that is embedded in the high-dimensional space. New techniques for dimensionality reduction aim at identifying and extracting the manifold from the high-dimensional space. A systematic empirical evaluation of a large number of dimensionality reduction techniques has been presented in [86]. This work has led to the development of the Matlab Toolbox for Dimensionality Reduction, which contains implementations of 27 techniques for dimensionality reduction. In addition, the toolbox contains implementation of 6 intrinsic dimensionality estimators and functions for out-of-sample extension, data generation, and data prewhitening. The report describes the techniques that are implemented in the toolbox in detail. Furthermore, it presents a number of examples that illustrate the functionality of the toolbox, and provide insight in the capabilities of state-of-the-art techniques for dimensionality reduction.", "title": "" }, { "docid": "959ba9c0929e36a8ef4a22a455ed947a", "text": "The discovery of causal relationships between a set of observed variables is a fundamental problem in science. For continuous-valued data linear acyclic causal models with additive noise are often used because these models are well understood and there are well-known methods to fit them to data. In reality, of course, many causal relationships are more or less nonlinear, raising some doubts as to the applicability and usefulness of purely linear methods. In this contribution we show that the basic linear framework can be generalized to nonlinear models. In this extended framework, nonlinearities in the data-generating process are in fact a blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms to be identified. In addition to theoretical results we show simulations and some simple real data experiments illustrating the identification power provided by nonlinearities.", "title": "" } ]
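The second passage above states the nonlinear additive-noise idea behind this query: if Y = f(X) + noise with the noise independent of X, then the independence of regression residuals from the putative cause is what identifies the causal direction. Below is a rough sketch of that direction test under stated assumptions: kernel ridge regression as the nonlinear regressor, an RBF median-heuristic HSIC score as the dependence measure, and a direct comparison of the two raw scores rather than a calibrated independence test; none of these implementation choices are prescribed by the paper.

    # Additive-noise direction test: fit effect on cause, score dependence of residuals on cause.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    def rbf_gram(v):
        d2 = (v[:, None] - v[None, :]) ** 2
        bw = np.median(d2[d2 > 0])                    # median-heuristic bandwidth
        return np.exp(-d2 / bw)

    def hsic(a, b):
        # biased HSIC estimate: trace(K H L H) / (n - 1)^2, with H the centering matrix
        n = len(a)
        H = np.eye(n) - np.ones((n, n)) / n
        return np.trace(rbf_gram(a) @ H @ rbf_gram(b) @ H) / (n - 1) ** 2

    def residual_dependence(cause, effect):
        reg = KernelRidge(kernel="rbf", alpha=1e-2).fit(cause[:, None], effect)
        resid = effect - reg.predict(cause[:, None])
        return hsic(cause, resid)                     # lower score = residuals look independent of the cause

    def anm_direction(x, y):
        return "x->y" if residual_dependence(x, y) < residual_dependence(y, x) else "y->x"

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 300)
    y = x ** 3 + 0.1 * rng.normal(size=300)           # toy nonlinear cause-effect pair
    print(anm_direction(x, y))                        # expected: x->y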
[ { "docid": "6858c559b78c6f2b5000c22e2fef892b", "text": "Graph clustering is one of the key techniques for understanding the structures present in graphs. Besides cluster detection, identifying hubs and outliers is also a key task, since they have important roles to play in graph data mining. The structural clustering algorithm SCAN, proposed by Xu et al., is successfully used in many application because it not only detects densely connected nodes as clusters but also identifies sparsely connected nodes as hubs or outliers. However, it is difficult to apply SCAN to large-scale graphs due to its high time complexity. This is because it evaluates the density for all adjacent nodes included in the given graphs. In this paper, we propose a novel graph clustering algorithm named SCAN++. In order to reduce time complexity, we introduce new data structure of directly two-hop-away reachable node set (DTAR). DTAR is the set of two-hop-away nodes from a given node that are likely to be in the same cluster as the given node. SCAN++ employs two approaches for efficient clustering by using DTARs without sacrificing clustering quality. First, it reduces the number of the density evaluations by computing the density only for the adjacent nodes such as indicated by DTARs. Second, by sharing a part of the density evaluations for DTARs, it offers efficient density evaluations of adjacent nodes. As a result, SCAN++ detects exactly the same clusters, hubs, and outliers from large-scale graphs as SCAN with much shorter computation time. Extensive experiments on both real-world and synthetic graphs demonstrate the performance superiority of SCAN++ over existing approaches.", "title": "" }, { "docid": "280acc4e653512fabf7b181be57b31e2", "text": "BACKGROUND\nHealth care workers incur frequent injuries resulting from patient transfer and handling tasks. Few studies have evaluated the effectiveness of mechanical lifts in preventing injuries and time loss due to these injuries.\n\n\nMETHODS\nWe examined injury and lost workday rates before and after the introduction of mechanical lifts in acute care hospitals and long-term care (LTC) facilities, and surveyed workers regarding lift use.\n\n\nRESULTS\nThe post-intervention period showed decreased rates of musculoskeletal injuries (RR = 0.82, 95% CI: 0.68-1.00), lost workday injuries (RR = 0.56, 95% CI: 0.41-0.78), and total lost days due to injury (RR = 0.42). Larger reductions were seen in LTC facilities than in hospitals. Self-reported frequency of lift use by registered nurses and by nursing aides were higher in the LTC facilities than in acute care hospitals. Observed reductions in injury and lost day injury rates were greater on nursing units that reported greater use of the lifts.\n\n\nCONCLUSIONS\nImplementation of patient lifts can be effective in reducing occupational musculoskeletal injuries to nursing personnel in both LTC and acute care settings. Strategies to facilitate greater use of mechanical lifting devices should be explored, as further reductions in injuries may be possible with increased use.", "title": "" }, { "docid": "28f145c48cc50c61de6a764fdd357375", "text": "In this communication, a circularly polarized (CP) substrate-integrated waveguide horn antenna is proposed and studied. The CP horn antenna is implemented on a single-layer substrate with a thickness of $0.12\\lambda _{\\mathrm {\\mathbf {0}}}$ at the center frequency (1.524 mm) for 24 GHz system applications. 
It comprises of an integrated phase controlling and power dividing structure, two waveguide antennas, and an antipodal linearly tapered slot antenna. With such a phase controlling and power dividing structure fully integrated inside the horn antenna, two orthogonal electric fields of the equal amplitude with 90° phase difference are achieved at the aperture plane of the horn antenna, thus, yielding an even effective circular polarization in a compact single-layered geometry. The measured results of the prototyped horn antenna exhibit a 5% bandwidth (23.7–24.9 GHz) with an axial ratio below 3 dB and a VSWR below 2. The gain of the antenna is around 8.5 dBi.", "title": "" }, { "docid": "d84abd378e3756052ede68731d73ca45", "text": "A major difficulty in applying word vector embeddings in information retrieval is in devising an effective and efficient strategy for obtaining representations of compound units of text, such as whole documents, (in comparison to the atomic words), for the purpose of indexing and scoring documents. Instead of striving for a suitable method to obtain a single vector representation of a large document of text, we aim to develop a similarity metric that makes use of the similarities between the individual embedded word vectors in a document and a query. More specifically, we represent a document and a query as sets of word vectors, and use a standard notion of similarity measure between these sets, computed as a function of the similarities between each constituent word pair from these sets. We then make use of this similarity measure in combination with standard information retrieval based similarities for document ranking. The results of our initial experimental investigations show that our proposed method improves MAP by up to 5.77%, in comparison to standard text-based language model similarity, on the TREC 6, 7, 8 and Robust ad-hoc test collections.", "title": "" }, { "docid": "16fe3567780f3c3f2d8951b4db76f792", "text": "Despite the well documented and emerging insider threat to information systems, there is currently no substantial effort devoted to addressing the problem of internal IT misuse. In fact, the great majority of misuse countermeasures address forms of abuse originating from external factors (i.e. the perceived threat from unauthorized users). This paper suggests a new and innovative approach of dealing with insiders that abuse IT systems. The proposed solution estimates the level of threat that is likely to originate from a particular insider by introducing a threat evaluation system based on certain profiles of user behaviour. However, a substantial amount of work is required, in order to materialize and validate the proposed solutions.", "title": "" }, { "docid": "34557bc145ccd6d83edfc80da088f690", "text": "This thesis is dedicated to my mother, who taught me that success is not the key to happiness. Happiness is the key to success. If we love what we are doing, we will be successful. This thesis is dedicated to my father, who taught me that luck is not something that is given to us at random and should be waited for. Luck is the sense to recognize an opportunity and the ability to take advantage of it. iii ACKNOWLEDGEMENTS I would like to thank my thesis committee –", "title": "" }, { "docid": "f022f9fcc42ec2c919fcead0e8e0cf83", "text": "Object recognition and pose estimation is an important task in computer vision. A pose estimation algorithm using only depth information is proposed in this paper. 
Foreground and background points are distinguished based on their relative positions with boundaries. Model templates are selected using synthetic scenes to make up for the point pair feature algorithm. An accurate and fast pose verification method is introduced to select result poses from thousands of poses. Our algorithm is evaluated against a large number of scenes and proved to be more accurate than algorithms using both color information and depth information.", "title": "" }, { "docid": "e389aec1a2cbd7373452915703eddbc2", "text": "Information-centric networking (ICN) proposes to redesign the Internet by replacing its host centric design wit h an information centric one, by establishing communication at the naming level, with the receiver side acting as the driving force beh ind content delivery. Such design promises great advantages for the del ivery of content to and from mobile hosts. This, however, is at the exp ense of increased networking overhead, specifically in the case o f Nameddata Networking (NDN) due to use of flooding for path recovery. In this paper, we propose a mobility centric solution to address the overhead and scalability problems in NDN by introducing a novel forwarding architecture that leverages decentralized serverassisted routing over flooding based strategies. We present an indepth study of the proposed architecture and provide demons trative results on its throughput and overhead performance at different levels of mobility proving its scalability and effectiveness, when compared to the current NDN based forwarding strategies.", "title": "" }, { "docid": "7e10aa210d6985d757a21b8b6c49ae53", "text": "Haptic devices for computers and video-game consoles aim to reproduce touch and to engage the user with `force feedback'. Although physical touch is often associated with proximity and intimacy, technologies of touch can reproduce such sensations over a distance, allowing intricate and detailed operations to be conducted through a network such as the Internet. The `virtual handshake' between Boston and London in 2002 is given as an example. This paper is therefore a critical investigation into some technologies of touch, leading to observations about the sociospatial framework in which this technological touching takes place. Haptic devices have now become routinely included with videogame consoles, and have started to be used in computer-aided design and manufacture, medical simulation, and even the cybersex industry. The implications of these new technologies are enormous, as they remould the human ^ computer interface from being primarily audiovisual to being more truly multisensory, and thereby enhance the sense of `presence' or immersion. But the main thrust of this paper is the development of ideas of presence over a large distance, and how this is enhanced by the sense of touch. By using the results of empirical research, including interviews with key figures in haptics research and engineering and personal experience of some of the haptic technologies available, I build up a picture of how `presence', `copresence', and `immersion', themselves paradoxically intangible properties, are guiding the design, marketing, and application of haptic devices, and the engendering and engineering of a set of feelings of interacting with virtual objects, across a range of distances. 
DOI:10.1068/d394t", "title": "" }, { "docid": "ec4b7c50f3277bb107961c9953fe3fc4", "text": "A blockchain is a linked-list of immutable tamper-proof blocks, which is stored at each participating node. Each block records a set of transactions and the associated metadata. Blockchain transactions act on the identical ledger data stored at each node. Blockchain was first perceived by Satoshi Nakamoto (Satoshi 2008), as a peer-to-peer money exchange system. Nakamoto referred to the transactional tokens exchanged among clients in his system, as Bitcoins. Overview", "title": "" }, { "docid": "15fc4abd2491b57c55c4ce339f41067e", "text": "A series of pyrazole analogues of natural piperine were synthesized by removing the basic piperidine moiety from the piperine nucleus. Piperine upon hydrolysis and oxidation, converted to piperonal and allowed to condense with substituted acetophenone gave chalcone derivative and cyclized finally with thiosemicarbazide to form pyrazole derivatives of piperine. Docking studies were carried out against different targets like Cyclooxygenase, farnasyl transferase receptors. Majority of the synthesized chemical compounds showed good fit with the active site of all the docked targets.Compound 6a have shown significant anti inflammatory activity and 6d and 6c have shown significant anticancer activity when compared with standard drugs.", "title": "" }, { "docid": "46410be2730753051c4cb919032fad6f", "text": "categories. That is, since cue validity is the probability of being in some category given some property, this probability will increase (or at worst not decrease) as the size of the category increases (e.g. the probability of being an animal given the property of flying is greater than the probability of bird given flying, since there must be more animals that fly than birds that fly).6 The idea that cohesive categories maximize the probability of particular properties given the category fares no better. In this case, the most specific categories will always be picked out. Medin (1982) has analyzed a variety of formal measures of category cohe­ siveness and pointed out problems with all of them. For example, one possible principle is to have concepts such that they minimize the similarity between contrasting categories; but minimizing between-category similarity will always lead one to sort a set of n objects into exactly two categories. Similarly, functions based on maximizing within-category similarity while minimizing between-category similarity lead to a variety of problems and counterintuitive expectations about when to accept new members into existent categories versus when to set up new categories. At a less formal but still abstract level, Sternberg (1982) has tried to translate some of Goodman's (e.g. 1983) ideas about induction into possible constraints on natural concepts. Sternberg suggests that the apparent naturalness of a concept increases with the familiarity of the concept (where familiarity is related to Goodman's notion of entrenchment), and decreases with the number of transformations specified in the concept (e.g. aging specifies certain trans­", "title": "" }, { "docid": "31996310254c69e62f4971db09499485", "text": "This paper studies P2P lending and the factors explaining loan default. This is an important issue because in P2P lending individual investors bear the credit risk, instead of financial institutions, which are experts in dealing with this risk. 
P2P lenders suffer a severe problem of information asymmetry, because they are at a disadvantage facing the borrower. For this reason, P2P lending sites provide potential lenders with information about borrowers and their loan purpose. They also assign a grade to each loan. The empirical study is based on loans' data collected from Lending Club (N = 24,449) from 2008 to 2014 that are first analyzed by using univariate means tests and survival analysis. Factors explaining default are loan purpose, annual income, current housing situation, credit history and indebtedness. Secondly, a logistic regression model is developed to predict defaults. The grade assigned by the P2P lending site is the most predictive factor of default, but the accuracy of the model is improved by adding other information, especially the borrower's debt level.", "title": "" }, { "docid": "0aabb07ef22ef59d6573172743c6378b", "text": "Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of various information sources. In this paper we formulate a variational autoencoder based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between individual source’s posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase robustness of inference in a multi-source setting.", "title": "" }, { "docid": "1b05959625fb8b733e9b9ecf3dcef22e", "text": "Relational agents—computational artifacts designed to build and maintain longterm social-emotional relationships with users—may provide an effective interface modality for older adults. This is especially true when the agents use simulated face-toface conversation as the primary communication medium, and for applications in which repeated interactions over long time periods are required, such as in health behavior change. In this article we discuss the design of a relational agent for older adults that plays the role of an exercise advisor, and report on the results of a longitudinal study involving 21 adults aged 62 to 84, half of whom interacted with the agent daily for two months in their homes and half who served as a standard-of-care control. Results indicate the agent was accepted and liked, and was significantly more efficacious at increasing physical activity (daily steps walked) than the control.", "title": "" }, { "docid": "0d9340dc849332af5854380fa460cfd5", "text": "Many scientific datasets archive a large number of variables over time. These timeseries data streams typically track many variables over relatively long periods of time, and therefore are often both wide and deep. In this paper, we describe the Visual Query Language (VQL) [3], a technology for locating time series patterns in historical or real time data. The user interactively specifies a search pattern, VQL finds similar shapes, and returns a ranked list of matches. 
VQL supports both univariate and multivariate queries, and allows the user to interactively specify the the quality of the match, including temporal warping, amplitude warping, and temporal constraints between features.", "title": "" }, { "docid": "2e02a16fa9c40bfb7e498bef8927e5ff", "text": "There exist two broad approaches to information retrieval (IR) in the legal domain: those based on manual knowledge engineering (KE) and those based on natural language processing (NLP). The KE approach is grounded in artificial intelligence (AI) and case-based reasoning (CBR), whilst the NLP approach is associated with open domain statistical retrieval. We provide some original arguments regarding the focus on KE-based retrieval in the past and why this is not sustainable in the long term. Legal approaches to questioning (NLP), rather than arguing (CBR), are proposed as the appropriate jurisprudential and cognitive underpinning for legal IR. Recall within the context of precision is proposed as a better fit to law than the ‘total recall’ model of the past, wherein conceptual and contextual search are combined to improve retrieval performance for both parties in a dispute.", "title": "" }, { "docid": "f0057666e16f7a0a05b4890d48fdbf42", "text": "BACKGROUND\nThe aim of this review was to systematically assess and meta-analyze the effects of yoga on modifiable biological cardiovascular disease risk factors in the general population and in high-risk disease groups.\n\n\nMETHODS\nMEDLINE/PubMed, Scopus, the Cochrane Library, and IndMED were screened through August 2013 for randomized controlled trials (RCTs) on yoga for predefined cardiovascular risk factors in healthy participants, non-diabetic participants with high risk for cardiovascular disease, or participants with type 2 diabetes mellitus. Risk of bias was assessed using the Cochrane risk of bias tool.\n\n\nRESULTS\nForty-four RCTs with a total of 3168 participants were included. Risk of bias was high or unclear for most RCTs. Relative to usual care or no intervention, yoga improved systolic (mean difference (MD)=-5.85 mm Hg; 95% confidence interval (CI)=-8.81, -2.89) and diastolic blood pressure (MD=-4.12 mm Hg; 95%CI=-6.55, -1.69), heart rate (MD=-6.59 bpm; 95%CI=-12.89, -0.28), respiratory rate (MD=-0.93 breaths/min; 95%CI=-1.70, -0.15), waist circumference (MD=-1.95 cm; 95%CI=-3.01, -0.89), waist/hip ratio (MD=-0.02; 95%CI=-0.03, -0.00), total cholesterol (MD=-13.09 mg/dl; 95%CI=-19.60, -6.59), HDL (MD=2.94 mg/dl; 95%CI=0.57, 5.31), VLDL (MD=-5.70 mg/dl; 95%CI=-7.36, -4.03), triglycerides (MD=-20.97 mg/dl; 95%CI=-28.61, -13.32), HbA1c (MD=-0.45%; 95%CI=-0.87, -0.02), and insulin resistance (MD=-0.19; 95%CI=-0.30, -0.08). Relative to exercise, yoga improved HDL (MD=3.70 mg/dl; 95%CI=1.14, 6.26).\n\n\nCONCLUSIONS\nThis meta-analysis revealed evidence for clinically important effects of yoga on most biological cardiovascular disease risk factors. Despite methodological drawbacks of the included studies, yoga can be considered as an ancillary intervention for the general population and for patients with increased risk of cardiovascular disease.", "title": "" }, { "docid": "662c6a0e2d9a10a9e1fd1046e827adc0", "text": "Counterfactuals are mental representations of alternatives to the past and produce consequences that are both beneficial and aversive to the individual. These apparently contradictory effects are integrated in a functionalist model of counterfactual thinking. 
The author reviews research in support of the assertions that (a) counterfactual thinking is activated automatically in response to negative affect, (b) the content of counterfactuals targets particularly likely causes of misfortune, (c) counterfactuals produce negative affective consequences through a contrast-effect mechanism and positive inferential consequences through a causal-inference mechanism, and (d) the net effect of counterfactual thinking is beneficial.", "title": "" }, { "docid": "c12cd99e8f1184fb77c7027c71a8dace", "text": "This paper reports on a wearable gesture-based controller fabricated using the sensing capabilities of the flexible thin-film piezoelectric polymer polyvinylidene fluoride (PVDF) which is shown to repeatedly and accurately discern, in real time, between right and left hand gestures. The PVDF is affixed to a compression sleeve worn on the forearm to create a wearable device that is flexible, adaptable, and highly shape conforming. Forearm muscle movements, which drive hand motions, are detected by the PVDF which outputs its voltage signal to a developed microcontroller-based board and processed by an artificial neural network that was trained to recognize the generated voltage profile of right and left hand gestures. The PVDF has been spatially shaded (etched) in such a way as to increase sensitivity to expected deformations caused by the specific muscles employed in making the targeted right and left gestures. The device proves to be exceptionally accurate both when positioned as intended and when rotated and translated on the forearm.", "title": "" } ]
scidocsrr
bd36cc3e4df180aaa44a286cb9ae0459
Learning task-specific models for dexterous, in-hand manipulation with simple, adaptive robot hands
[ { "docid": "a76826da7f077cf41aaa7c8eca9be3fe", "text": "In this paper we present an open-source design for the development of low-complexity, anthropomorphic, underactuated robot hands with a selectively lockable differential mechanism. The differential mechanism used is a variation of the whiffletree (or seesaw) mechanism, which introduces a set of locking buttons that can block the motion of each finger. The proposed design is unique since with a single motor and the proposed differential mechanism the user is able to control each finger independently and switch between different grasping postures in an intuitive manner. Anthropomorphism of robot structure and motion is achieved by employing in the design process an index of anthropomorphism. The proposed robot hands can be easily fabricated using low-cost, off-the-shelf materials and rapid prototyping techniques. The efficacy of the proposed design is validated through different experimental paradigms involving grasping of everyday life objects and execution of daily life activities. The proposed hands can be used as affordable prostheses, helping amputees regain their lost dexterity.", "title": "" }, { "docid": "9e88b710d55b90074a98ba70527e0cea", "text": "In this paper we present a series of design directions for the development of affordable, modular, light-weight, intrinsically-compliant, underactuated robot hands, that can be easily reproduced using off-the-shelf materials. The proposed robot hands, efficiently grasp a series of everyday life objects and are considered to be general purpose, as they can be used for various applications. The efficiency of the proposed robot hands has been experimentally validated through a series of experimental paradigms, involving: grasping of multiple everyday life objects with different geometries, myoelectric (EMG) control of the robot hands in grasping tasks, preliminary results on a grasping capable quadrotor and autonomous grasp planning under object position and shape uncertainties.", "title": "" } ]
[ { "docid": "8933d7d0f57a532ef27b9dbbb3727a88", "text": "All people can not do as they plan, it happens because of their habits. Therefore, habits and moods may affect their productivity. Hence, the habits and moods are the important parts of person's life. Such habits may be analyzed with various machine learning techniques as available nowadays. Now the question of analyzing the Habits and moods of a person with a goal of increasing one's productivity comes to mind. This paper discusses one such technique called HDML (Habit Detection with Machine Learning). HDML model analyses the mood which helps us to deal with a bad mood or a state of unproductivity, through suggestions about such activities that alleviate our mood. The overall accuracy of the model is about 87.5 %.", "title": "" }, { "docid": "643599f9b0dcfd270f9f3c55567ed985", "text": "OBJECTIVES\nTo describe a new first-trimester sonographic landmark, the retronasal triangle, which may be useful in the early screening for cleft palate.\n\n\nMETHODS\nThe retronasal triangle, i.e. the three echogenic lines formed by the two frontal processes of the maxilla and the palate visualized in the coronal view of the fetal face posterior to the nose, was evaluated prospectively in 100 consecutive normal fetuses at the time of routine first-trimester sonographic screening at 11 + 0 to 13 + 6 weeks' gestation. In a separate study of five fetuses confirmed postnatally as having a cleft palate, ultrasound images, including multiplanar three-dimensional views, were analyzed retrospectively to review the retronasal triangle.\n\n\nRESULTS\nNone of the fetuses evaluated prospectively was affected by cleft lip and palate. During their first-trimester scan, the retronasal triangle could not be identified in only two fetuses. Reasons for suboptimal visualization of this area included early gestational age at scanning (11 weeks) and persistent posterior position of the fetal face. Of the five cases with postnatal diagnosis of cleft palate, an abnormal configuration of the retronasal triangle was documented in all cases on analysis of digitally stored three-dimensional volumes.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of incorporating evaluation of the retronasal triangle into the routine evaluation of the fetal anatomy at 11 + 0 to 13 + 6 weeks' gestation. Because fetuses with cleft palate have an abnormal configuration of the retronasal triangle, focused examination of the midface, looking for this area at the time of the nuchal translucency scan, may facilitate the early detection of cleft palate in the first trimester.", "title": "" }, { "docid": "5481f319296c007412e62129d2ec5943", "text": "We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. We provide conditions under which they recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset based on pixel-wise reconstruction loss, and propose an optimization criterion that encourages unsupervised learning of informative latent features.", "title": "" }, { "docid": "532980d1216f9f10332cc13b6a093fb4", "text": "Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words. 
Current DSMs, however, represent context words as separate features, which causes the loss of important information for word expectations, such as word order and interrelations. In this paper, we present a DSM which addresses the issue by defining verb contexts as joint dependencies. We test our representation in a verb similarity task on two datasets, showing that joint contexts are more efficient than single dependencies, even with a relatively small amount of training data.", "title": "" }, { "docid": "cc99e806503b158aa8a41753adecd50c", "text": "Semantic Mutation Testing (SMT) is a technique that aims to capture errors caused by possible misunderstandings of the semantics of a description language. It is intended to target a class of errors which is different from those captured by traditional Mutation Testing (MT). This paper describes our experiences in the development of an SMT tool for the C programming language: SMT-C. In addition to implementing the essential requirements of SMT (generating semantic mutants and running SMT analysis) we also aimed to achieve the following goals: weak MT/SMT for C, good portability between different configurations, seamless integration into test routines of programming with C and an easy to use front-end.", "title": "" }, { "docid": "643358b55155cab539188423c2b92713", "text": "Recently, DevOps has emerged as an alternative for software organizations inserted into a dynamic market to handle daily software demands. As claimed, it intends to make the software development and operations teams to work collaboratively. However, it is hard to observe a shared understanding of DevOps, what potentially hinders the discussions in the literature and can confound observations when conducting empirical studies. Therefore, we performed a Multivocal Literature Review aiming at characterizing DevOps in multiple perspectives, including data sources from technical and gray literature. Grounded Theory procedures were used to rigorous analyze the collected data. It allowed us to achieve a grounded definition for DevOps, as well as to identify its recurrent principles, practices, required skills, potential benefits, challenges and what motivates the organizations to adopt it. Finally, we understand the DevOps movement has identified relevant issues in the state-of-the-practice. However, we advocate for the scientific investigations concerning the potential benefits and drawbacks as a consequence of adopting the suggested principles and practices.", "title": "" }, { "docid": "0a09f894029a0b8730918c14906dca9e", "text": "In the last few years, machine learning has become a very popular tool for analyzing financial text data, with many promising results in stock price forecasting from financial news, a development with implications for the E cient Markets Hypothesis (EMH) that underpins much economic theory. In this work, we explore recurrent neural networks with character-level language model pre-training for both intraday and interday stock market forecasting. In terms of predicting directional changes in the Standard & Poor’s 500 index, both for individual companies and the overall index, we show that this technique is competitive with other state-of-the-art approaches.", "title": "" }, { "docid": "13ab6462ca59ca8618174aa00c15ba58", "text": "In Brazil, around 2 000 000 families have not been connected to an electricity grid yet. Out of these, a significant number of villages may never be connected to the national grid due to their remoteness. 
For the people living in these communities, access to renewable energy sources is the only solution to meet their energy needs. In these communes, the electricity is mainly used for household purposes such as lighting. There is little scope for the productive use of energy. It is recognized that electric service contributes particularly to inclusive social development and to a lesser extent to pro-poor growth as well as to environmental sustainability. In this paper, we present the specification, design, and development of a standalone micro-grid supplied by a hybrid wind-solar generating source. The goal of the project was to provide a reliable, continuous, sustainable, and good-quality electricity service to users, as provided in bigger cities. As a consequence, several technical challenges arose and were overcome successfully as will be related in this paper, contributing to increase of confidence in renewable systems to isolated applications.", "title": "" }, { "docid": "bd32bda2e79d28122f424ec4966cde15", "text": "This paper holds a survey on plant leaf diseases classification using image processing. Digital image processing has three basic steps: image processing, analysis and understanding. Image processing contains the preprocessing of the plant leaf as segmentation, color extraction, diseases specific data extraction and filtration of images. Image analysis generally deals with the classification of diseases. Plant leaf can be classified based on their morphological features with the help of various classification techniques such as PCA, SVM, and Neural Network. These classifications can be defined various properties of the plant leaf such as color, intensity, dimensions. Back propagation is most commonly used neural network. It has many learning, training, transfer functions which is used to construct various BP networks. Characteristics features are the performance parameter for image recognition. BP networks shows very good results in classification of the grapes leaf diseases. This paper provides an overview on different image processing techniques along with BP Networks used in leaf disease classification.", "title": "" }, { "docid": "a57caf61fdae1ab9c1fc4d944ebe03cd", "text": "The handiness and ease of use of tele-technology like mobile phones has surged the growth of ICT in developing countries like India than ever. Mobile phones are showing overwhelming responses and have helped farmers to do the work on timely basis and stay connected with the outer farming world. But mobile phones are of no use when it comes to the real-time farm monitoring or accessing the accurate information because of the little research and application of mobile phone in agricultural field for such uses. The current demand of use of WSN in agricultural fields has revolutionized the farming experiences. In Precision Agriculture, the contribution of WSN are numerous staring from monitoring soil health, plant health to the storage of crop yield. Due to pressure of population and economic inflation, a lot of pressure is on farmers to produce more out of their fields with fewer resources. This paper gives brief insight into the relation of plant disease prediction with the help of wireless sensor networks. Keywords— Plant Disease Monitoring, Precision Agriculture, Environmental Parameters, Wireless Sensor Network (WSN)", "title": "" }, { "docid": "83f1fc22d029b3a424afcda770a5af23", "text": "Three species of Xerolycosa: Xerolycosa nemoralis (Westring, 1861), Xerolycosa miniata (C.L. 
Koch, 1834) and Xerolycosa mongolica (Schenkel, 1963), occurring in the Palaearctic Region are surveyed, illustrated and redescribed. Arctosa mongolica Schenkel, 1963 is removed from synonymy with Xerolycosa nemoralis and transferred to Xerolycosa, and the new combination Xerolycosa mongolica (Schenkel, 1963) comb. n. is established. One new synonymy, Xerolycosa undulata Chen, Song et Kim, 1998 syn.n. from Heilongjiang = Xerolycosa mongolica (Schenkel, 1963), is proposed. In addition, one more new combination is established, Trochosa pelengena (Roewer, 1960) comb. n., ex Xerolycosa.", "title": "" }, { "docid": "22cc9e5487975f8b7ca400ad69504107", "text": "IMSI Catchers are tracking devices that break the privacy of the subscribers of mobile access networks, with disruptive effects to both the communication services and the trust and credibility of mobile network operators. Recently, we verified that IMSI Catcher attacks are really practical for the state-of-the-art 4G/LTE mobile systems too. Our IMSI Catcher device acquires subscription identities (IMSIs) within an area or location within a few seconds of operation and then denies access of subscribers to the commercial network. Moreover, we demonstrate that these attack devices can be easily built and operated using readily available tools and equipment, and without any programming. We describe our experiments and procedures that are based on commercially available hardware and unmodified open source software.", "title": "" }, { "docid": "fa065201fb8c95487eb6a55942befc41", "text": "Numerous machine learning algorithms applied on Intrusion Detection System (IDS) to detect enormous attacks. However, it is difficult for machine to learn attack properties globally since there are huge and complex input features. Feature selection can overcome this problem by selecting the most important features only to reduce the dimensionality of input features. We leverage Artificial Neural Network (ANN) for the feature selection. In addition, in order to be suitable for resource-constrained devices, we can divide the IDS into smaller parts based on TCP/IP layer since different layer has specific attack types. We show the IDS for transport layer only as a prove of concept. We apply Stacked Auto Encoder (SAE) which belongs to deep learning algorithm as a classifier for KDD99 Dataset. Our experiment shows that the reduced input features are sufficient for classification task. 한국정보보호학회 하계학술대회 논문집 Vol. 26, No. 1", "title": "" }, { "docid": "6a1a62a5c586f0abd08a94a19371004f", "text": "Tourism is perceived as an appropriate solution for pursuing sustainable economic growth due to its main characteristics. In the context of sustainable tourism, gamification can act as an interface between tourists (clients), organisations (companies, NGOs, public institutions) and community, an interface built in a responsible and ethical way. The main objective of this study is to identify gamification techniques and applications used by organisations in the hospitality and tourism industry to improve their sustainable activities. The first part of the paper examines the relationship between gamification and sustainability, highlighting the links between these two concepts. The second part identifies success stories of gamification applied in hospitality and tourism and reviews gamification benefits by analysing the relationship between tourism organisations and three main tourism stakeholders: tourists, tourism employees and local community. 
The analysis is made in connection with the main pillars of sustainability: economic, social and environmental. This study is positioning the role of gamification in the tourism and hospitality industry and further, into the larger context of sustainable development.", "title": "" }, { "docid": "6465b2af36350a444fbc6682540ff21d", "text": "We present an algorithm for finding an s-sparse vector x that minimizes the square-error ∥y − Φx∥^2 where Φ satisfies the restricted isometry property (RIP), with isometric constant Δ_{2s} < 1/3. Our algorithm, called GraDeS (Gradient Descent with Sparsification) iteratively updates x as: [EQUATION]\n where γ > 1 and H_s sets all but s largest magnitude coordinates to zero. GraDeS converges to the correct solution in constant number of iterations. The condition Δ_{2s} < 1/3 is most general for which a near-linear time algorithm is known. In comparison, the best condition under which a polynomial-time algorithm is known, is Δ_{2s} < √2 − 1.\n Our Matlab implementation of GraDeS outperforms previously proposed algorithms like Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered cases where L1-regularized regression (Lasso) fails but GraDeS finds the correct solution.", "title": "" }, { "docid": "440e45de4d13e89e3f268efa58f8a51a", "text": "This letter describes the concept, design, and measurement of a low-profile integrated microstrip antenna for dual-band applications. The antenna operates at both the GPS L1 frequency of 1.575 GHz with circular polarization and 5.88 GHz with a vertical linear polarization for dedicated short-range communication (DSRC) application. The antenna is low profile and meets stringent requirements on pattern/polarization performance in both bands. The design procedure is discussed, and full measured data are presented.", "title": "" }, { "docid": "50c639dfa7063d77cda26666eabeb969", "text": "This paper addresses the problem of detecting people in two dimensional range scans. Previous approaches have mostly used pre-defined features for the detection and tracking of people. We propose an approach that utilizes a supervised learning technique to create a classifier that facilitates the detection of people. In particular, our approach applies AdaBoost to train a strong classifier from simple features of groups of neighboring beams corresponding to legs in range data. Experimental results carried out with laser range data illustrate the robustness of our approach even in cluttered office environments", "title": "" }, { "docid": "3ecd1c083d256c7fd88991f1e442cb8b", "text": "It has long been observed that database management systems focus on traditional business applications, and that few people use a database management system outside their workplace. Many have wondered what it will take to enable the use of data management technology by a broader class of users and for a much wider range of applications.\n Google Fusion Tables represents an initial answer to the question of how data management functionality that focused on enabling new users and applications would look in today's computing environment. 
This paper characterizes such users and applications and highlights the resulting principles, such as seamless Web integration, emphasis on ease of use, and incentives for data sharing, that underlie the design of Fusion Tables. We describe key novel features, such as the support for data acquisition, collaboration, visualization, and web-publishing.", "title": "" }, { "docid": "3c82ba94aa4d717d51c99cfceb527f22", "text": "Manipulator collision avoidance using genetic algorithms is presented. Control gains in the collision avoidance control model are selected based on genetic algorithms. A repulsive force is artificially created using the distances between the robot links and obstacles, which are generated by a distance computation algorithm. Real-time manipulator collision avoidance control has achieved. A repulsive force gain is introduced through the approaches for definition of link coordinate frames and kinematics computations. The safety distance between objects is affected by the repulsive force gain. This makes the safety zone adjustable and provides greater intelligence for robotic tasks under the ever-changing environment.", "title": "" }, { "docid": "61c4146ac8b55167746d3f2b9c8b64e8", "text": "In a variety of Network-based Intrusion Detection System (NIDS) applications, one desires to detect groups of unknown attack (e.g., botnet) packet-flows, with a group potentially manifesting its atypicality (relative to a known reference “normal”/null model) on a low-dimensional subset of the full measured set of features used by the IDS. What makes this anomaly detection problem quite challenging is that it is a priori unknown which (possibly sparse) subset of features jointly characterizes a particular application, especially one that has not been seen before, which thus represents an unknown behavioral class (zero-day threat). Moreover, nowadays botnets have become evasive, evolving their behavior to avoid signature-based IDSes. In this work, we apply a novel active learning (AL) framework for botnet detection, facilitating detection of unknown botnets (assuming no ground truth examples of same). We propose a new anomaly-based feature set that captures the informative features and exploits the sequence of packet directions in a given flow. Experiments on real world network traffic data, including several common Zeus botnet instances, demonstrate the advantage of our proposed features and AL system.", "title": "" } ]
scidocsrr
ef3c20dc9ab787e25e77ba60675f2ca6
A Memetic Fingerprint Matching Algorithm
[ { "docid": "0e2d6ebfade09beb448e9c538dadd015", "text": "Matching incomplete or partial fingerprints continues to be an important challenge today, despite the advances made in fingerprint identification techniques. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint has made this problem important from a commercial perspective, there is also considerable interest in processing partial and latent fingerprints obtained at crime scenes. When the partial print does not include structures such as core and delta, common matching methods based on alignment of singular structures fail. We present an approach that uses localized secondary features derived from relative minutiae information. A flow network-based matching technique is introduced to obtain one-to-one correspondence of secondary features. Our method balances the tradeoffs between maximizing the number of matches and minimizing total feature distance between query and reference fingerprints. A two-hidden-layer fully connected neural network is trained to generate the final similarity score based on minutiae matched in the overlapping areas. Since the minutia-based fingerprint representation is an ANSI-NIST standard [American National Standards Institute, New York, 1993], our approach has the advantage of being directly applicable to existing databases. We present results of testing on FVC2002’s DB1 and DB2 databases. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "b21c6ab3b97fd23f8fe1f8645608b29f", "text": "Daily activity recognition can help people to maintain a healthy lifestyle and robot to better interact with users. Robots could therefore use the information coming from the activities performed by users to give them some custom hints to improve lifestyle and daily routine. The pervasiveness of smart things together with advances in cloud robotics can help the robot to perceive and collect more information about the users and the environment. In particular thanks to the miniaturization and low cost of Inertial Measurement Units, in the last years, body-worn activity recognition has gained popularity. In this work, we investigated the performances with an unsupervised approach to recognize eight different gestures performed in daily living wearing a system composed of two inertial sensors placed on the hand and on the wrist. In this context our aim is to evaluate whether the system is able to recognize the gestures in more realistic applications, where is not possible to have a training set. The classification problem was analyzed using two unsupervised approaches (K-Mean and Gaussian Mixture Model), with an intra-subject and an inter-subject analysis, and two supervised approaches (Support Vector Machine and Random Forest), with a 10-fold cross validation analysis and with a Leave-One-Subject-Out analysis to compare the results. The outcomes show that even in an unsupervised context the system is able to recognize the gestures with an averaged accuracy of 0.917 in the K-Mean inter-subject approach and 0.796 in the Gaussian Mixture Model inter-subject one.", "title": "" }, { "docid": "7021db9b0e77b2df2576f0cc5eda8d7d", "text": "Provides an abstract of the tutorial presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.", "title": "" }, { "docid": "2d30ed139066b025dcb834737d874c99", "text": "Considerable advances have occurred in recent years in the scientific knowledge of the benefits of breastfeeding, the mechanisms underlying these benefits, and in the clinical management of breastfeeding. This policy statement on breastfeeding replaces the 1997 policy statement of the American Academy of Pediatrics and reflects this newer knowledge and the supporting publications. The benefits of breastfeeding for the infant, the mother, and the community are summarized, and recommendations to guide the pediatrician and other health care professionals in assisting mothers in the initiation and maintenance of breastfeeding for healthy term infants and high-risk infants are presented. The policy statement delineates various ways in which pediatricians can promote, protect, and support breastfeeding not only in their individual practices but also in the hospital, medical school, community, and nation.", "title": "" }, { "docid": "92fdbab17be68e94b2033ef79b41cf0c", "text": "Areas of convergence and divergence between the Narcissistic Personality Inventory (NPI; Raskin & Terry, 1988) and the Pathological Narcissism Inventory (PNI; Pincus et al., 2009) were evaluated in a sample of 586 college students. Summary scores for the NPI and PNI were not strongly correlated (r = .22) but correlations between certain subscales of these two inventories were larger (e.g., r = .71 for scales measuring Exploitativeness). 
Both measures had a similar level of correlation with the Narcissistic Personality Disorder scale from the Personality Diagnostic Questionnaire-4 (Hyler, 1994) (r = .40 and .35, respectively). The NPI and PNI diverged, however, with respect to their associations with Explicit Self-Esteem. Self-esteem was negatively associated with the PNI but positively associated with the NPI (r = .34 versus r = .26). Collectively, the results highlight the need for precision when discussing the personality characteristics associated with narcissism. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ce29ddfd7b3d3a28ddcecb7a5bb3ac8e", "text": "Steganography consist of concealing secret information in a cover object to be sent over a public communication channel. It allows two parties to share hidden information in a way that no intruder can detect the presence of hidden information. This paper presents a novel steganography approach based on pixel location matching of the same cover image. Here the information is not directly embedded within the cover image but a sequence of 4 bits of secret data is compared to the 4 most significant bits (4MSB) of the cover image pixels. The locations of the matching pixels are taken to substitute the 2 least significant bits (2LSB) of the cover image pixels. Since the data are not directly hidden in cover image, the proposed approach is more secure and difficult to break. Intruders cannot intercept it by using common LSB techniques.", "title": "" }, { "docid": "818c075d79a51fcab4c38031f14a98ef", "text": "This paper presents a statistical approach to collaborative filtering and investigates the use of latent class models for predicting individual choices and preferences based on observed preference behavior. Two models are discussed and compared: the aspect model, a probabilistic latent space model which models individual preferences as a convex combination of preference factors, and the two-sided clustering model, which simultaneously partitions persons and objects into clusters. We present EM algorithms for different variants of the aspect model and derive an approximate EM algorithm based on a variational principle for the two-sided clustering model. The benefits of the different models are experimentally investigated on a large movie data set.", "title": "" }, { "docid": "83e50a2c76217f60057d8bf680a12b92", "text": "[1] Luo, Z. X., Zhou, X. C., David XianFeng, G. U. (2014). From a projective invariant to some new properties of algebraic hypersurfaces. Science China Mathematics, 57(11), 2273-2284. [2] Fan, B., Wu, F., Hu, Z. (2010). Line matching leveraged by point correspondences. IEEE Conference on Computer Vision & Pattern Recognition (Vol.238, pp.390-397). [3] Fan, B., Wu, F., & Hu, Z. (2012). Robust line matching through line–point invariants. Pattern Recognition, 45(2), 794-805. [4] López, J., Santos, R., Fdez-Vidal, X. R., & Pardo, X. M. (2015). Two-view line matching algorithm based on context and appearance in low-textured images. Pattern Recognition, 48(7), 2164-2184. Dalian University of Technology Qi Jia, Xinkai Gao, Xin Fan*, Zhongxuan Luo, Haojie Li, and Ziyao Chen Novel Coplanar Line-points Invariants for Robust Line Matching Across Views", "title": "" }, { "docid": "61dcc07734c98bf0ad01a98fe0c55bf4", "text": "The system includes terminal fingerprint acquisition module and attendance module. 
It can realize automatically such functions as information acquisition of fingerprint, processing, and wireless transmission, fingerprint matching and making an attendance report. After taking the attendance, this system sends the attendance of every student to their parent's mobile through GSM and also stored the attendance of respective student to calculate the percentage of attendance and alerts to class in charge. Attendance system facilitates access to the attendance of a particular student in a particular class. This system eliminates the need for stationary materials and personnel for the keeping of records and efforts of class in charge.", "title": "" }, { "docid": "5a91b2d8611b14e33c01390181eb1891", "text": "Rapidly expanding volume of publications in the biomedical domain makes it increasingly difficult for a timely evaluation of the latest literature. That, along with a push for automated evaluation of clinical reports, present opportunities for effective natural language processing methods. In this study we target the problem of named entity recognition, where texts are processed to annotate terms that are relevant for biomedical studies. Terms of interest in the domain include gene and protein names, and cell lines and types. Here we report on a pipeline built on Embeddings from Language Models (ELMo) and a deep learning package for natural language processing (AllenNLP). We trained context-aware token embeddings on a dataset of biomedical papers using ELMo, and incorporated these embeddings in the LSTM-CRF model used by AllenNLP for named entity recognition. We show these representations improve named entity recognition for different types of biomedical named entities. We also achieve a new state of the art in gene mention detection on the BioCreative II gene mention shared task.", "title": "" }, { "docid": "e93517eb28df17dddfc63eb7141368f9", "text": "Domain transfer learning generalizes a learning model across training data and testing data with different distributions. A general principle to tackle this problem is reducing the distribution difference between training data and testing data such that the generalization error can be bounded. Current methods typically model the sample distributions in input feature space, which depends on nonlinear feature mapping to embody the distribution discrepancy. However, this nonlinear feature space may not be optimal for the kernel-based learning machines. To this end, we propose a transfer kernel learning (TKL) approach to learn a domain-invariant kernel by directly matching source and target distributions in the reproducing kernel Hilbert space (RKHS). Specifically, we design a family of spectral kernels by extrapolating target eigensystem on source samples with Mercer's theorem. The spectral kernel minimizing the approximation error to the ground truth kernel is selected to construct domain-invariant kernel machines. Comprehensive experimental evidence on a large number of text categorization, image classification, and video event recognition datasets verifies the effectiveness and efficiency of the proposed TKL approach over several state-of-the-art methods.", "title": "" }, { "docid": "77cea98467305b9b3b11de8d3cec6ec2", "text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. 
Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.", "title": "" }, { "docid": "8405b35a36235ba26444655a3619812d", "text": "Studying the reason why single-layer molybdenum disulfide (MoS2) appears to fall short of its promising potential in flexible nanoelectronics, we find that the nature of contacts plays a more important role than the semiconductor itself. In order to understand the nature of MoS2/metal contacts, we perform ab initio density functional theory calculations for the geometry, bonding, and electronic structure of the contact region. We find that the most common contact metal (Au) is rather inefficient for electron injection into single-layer MoS2 and propose Ti as a representative example of suitable alternative electrode materials.", "title": "" }, { "docid": "64e2b73e8a2d12a1f0bbd7d07fccba72", "text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an all-around evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.", "title": "" }, { "docid": "a5cb288b5a2f29c22a9338be416a27f7", "text": "ENCOURAGING CHILDREN'S INTRINSIC MOTIVATION CAN HELP THEM TO ACHIEVE ACADEMIC SUCCESS (ADELMAN, 1978; ADELMAN & TAYLOR, 1986; GOTTFRIED, 1983, 1985). TO HELP STUDENTS WITH AND WITHOUT LEARNING DISABILITIES TO DEVELOP ACADEMIC INTRINSIC MOTIVATION, IT IS IMPORTANT TO DEFINE THE FACTORS THAT AFFECT MOTIVATION (ADELMAN & CHANEY, 1982; ADELMAN & TAYLOR, 1983). THIS ARTICLE OFFERS EDUCATORS AN INSIGHT INTO THE EFFECTS OF DIFFERENT MOTIVATIONAL ORIENTATIONS ON THE SCHOOL LEARNING OF STUDENTS WITH LEARNING DISABILITIES, AS WELL AS INTO THE VARIABLES AFFECTING INTRINSIC AND EXTRINSIC MOTIVATION. ALSO INCLUDED ARE RECOMMENDATIONS, BASED ON EMPIRICAL EVIDENCE, FOR ENHANCING ACADEMIC INTRINSIC MOTIVATION IN LEARNERS OF VARYING ABILITIES AT ALL GRADE LEVELS. INTEREST IN THE VARIOUS ASPECTS OF INTRINSIC and extrinsic motivation has accelerated in recent years. Motivational orientation is considered to be an important factor in determining the academic success of children with and without disabilities (Adelman & Taylor, 1986; Calder & Staw, 1975; Deci, 1975; Deci & Chandler, 1986; Schunk, 1991).
Academic intrinsic motivation has been found to be significantly correlated with academic achievement in students with learning disabilities (Gottfried, 1985) and without learning disabilities (Adelman, 1978; Adelman & Taylor, 1983). However, children with learning disabilities (LD) are less likely than their nondisabled peers to be intrinsically motivated (Adelman & Chaney, 1982; Adelman & Taylor, 1986; Mastropieri & Scruggs, 1994; Smith, 1994). Students with LD have been found to have more positive attitudes toward school than toward school learning (Wilson & David, 1994). Wilson and David asked 89 students with LD to respond to items on the School Attitude Measures (SAM; Wick, 1990) and on the Children's Academic Intrinsic Motivation Inventory (CAIMI; Gottfried, 1986). The students with L D were found to have a more positive attitude toward the school environment than toward academic tasks. Research has also shown that students with LD may derive their self-perceptions from areas other than school, and do not see themselves as less competent in areas of school learning (Grolnick & Ryan, 1990). Although there is only a limited amount of research available on intrinsic motivation in the population with special needs (Adelman, 1978; Adelman & Taylor, 1986; Grolnick & Ryan, 1990), there is an abundance of research on the general school-age population. This article is an at tempt to use existing research to identify variables pertinent to the academic intrinsic motivation of children with learning disabilities. The first part of the article deals with the definitions of intrinsic and extrinsic motivation. The next part identifies some of the factors affecting the motivational orientation and subsequent academic achievement of school-age children. This is followed by empirical evidence of the effects of rewards on intrinsic motivation, and suggestions on enhancing intrinsic motivation in the learner. At the end, several strategies are presented that could be used by the teacher to develop and encourage intrinsic motivation in children with and without LD. l O R E M E D I A L A N D S P E C I A L E D U C A T I O N Volume 18. Number 1, January/February 1997, Pages 12-19 D E F I N I N G M O T I V A T I O N A L A T T R I B U T E S Intrinsic Motivation Intrinsic motivation has been defined as (a) participation in an activity purely out of curiosity, that is, from a need to know more about something (Deci, 1975; Gottfried, 1983; Woolfolk, 1990); (b) the desire to engage in an activity purely for the sake of participating in and completing a task (Bates, 1979; Deci, Vallerand, Pelletier, & Ryan, 1991); and (c) the desire to contribute (Mills, 1991). Academic intrinsic motivation has been measured by (a) the ability of the learner to persist with the task assigned (Brophy, 1983; Gottfried, 1983); (b) the amount of time spent by the student on tackling the task (Brophy, 1983; Gottfried, 1983); (c) the innate curiosity to learn (Gottfried, 1983); (d) the feeling of efficacy related to an activity (Gottfried, 1983; Schunk, 1991; Smith, 1994); (e) the desire to select an activity (Brophy, 1983); and (f) a combination of all these variables (Deci, 1975; Deci & Ryan, 1985). A student who is intrinsically motivated will persist with the assigned task, even though it may be difficult (Gottfried, 1983; Schunk, 1990), and will not need any type of reward or incentive to initiate or complete a task (Beck, 1978; Deci, 1975; Woolfolk, 1990). 
This type of student is more likely to complete the chosen task and be excited by the challenging nature of an activity. The intrinsically motivated student is also more likely to retain the concepts learned and to feel confident about tackling unfamiliar learning situations, like new vocabulary words. However, the amount of interest generated by the task also plays a role in the motivational orientation of the learner. An assigned task with zero interest value is less likely to motivate the student than is a task that arouses interest and curiosity. Intrinsic motivation is based in the innate, organismic needs for competence and self-determination (Deci & Ryan, 1985; Woolfolk, 1990), as well as the desire to seek and conquer challenges (Adelman & Taylor, 1990). People are likely to be motivated to complete a task on the basis of their level of interest and the nature of the challenge. Research has suggested that children with higher academic intrinsic motivation function more effectively in school (Adelman & Taylor, 1990; Boggiano & Barrett, 1992; Gottfried, 1990; Soto, 1988). Besides innate factors, there are several other variables that can affect intrinsic motivation. Extrinsic Motivation Adults often give the learner an incentive to participate in or to complete an activity. The incentive might be in the form of a tangible reward, such as money or candy. Or, it might be the likelihood of a reward in the future, such as a good grade. Or, it might be a nontangible reward, for example, verbal praise or a pat on the back. The incentive might also be exemption from a less liked activity or avoidance of punishment. These incentives are extrinsic motivators. A person is said to be extrinsically motivated when she or he undertakes a task purely for the sake of attaining a reward or for avoiding some punishment (Adelman & Taylor, 1990; Ball, 1984; Beck, 1978; Deci, 1975; Wiersma, 1992; Woolfolk, 1990). Extrinsic motivation can, especially in learning and other forms of creative work, interfere with intrinsic motivation (Benninga et al., 1991; Butler, 1989; Deci, 1975; McCullers, Fabes, & Moran, 1987). In such cases, it might be better not to offer rewards for participating in or for completing an activity, be it textbook learning or an organized play activity. Not only teachers but also parents have been found to negatively influence the motivational orientation of the child by providing extrinsic consequences contingent upon their school performance (Gottfried, Fleming, & Gottfried, 1994). The relationship between rewards (and other extrinsic factors) and the intrinsic motivation of the learner is outlined in the following sections. MOTIVATION AND THE LEARNER In a classroom, the student is expected to tackle certain types of tasks, usually with very limited choices. Most of the research done on motivation has been done in settings where the learner had a wide choice of activities, or in a free-play setting. In reality, the student has to complete tasks that are compulsory as well as evaluated (Brophy, 1983). Children are expected to complete a certain number of assignments that meet specified criteria. For example, a child may be asked to complete five multiplication problems and is expected to get correct answers to at least three. Teachers need to consider how instructional practices are designed from the motivational perspective (Schunk, 1990). Development of skills required for academic achievement can be influenced by instructional design. 
If the design undermines student ability and skill level, it can reduce motivation (Brophy, 1983; Schunk, 1990). This is especially applicable to students with disabilities. Students with LD have shown a significant increase in academic learning after engaging in interesting tasks like computer games designed to enhance learning (Adelman, Lauber, Nelson, & Smith, 1989). A common aim of educators is to help all students enhance their learning, regardless of the student's ability level. To achieve this outcome, the teacher has to develop a curriculum geared to the individual needs and ability levels of the students, especially the students with special needs. If the assigned task is within the child's ability level as well as inherently interesting, the child is very likely to be intrinsically motivated to tackle the task. The task should also be challenging enough to stimulate the child's desire to attain mastery. The probability of success or failure is often attributed to factors such as ability, effort, difficulty level of the task, R E M E D I A L A N D S P E C I A L E D U C A T I O N 1 O Volume 18, Number 1, January/February 1997 and luck (Schunk, 1990). One or more of these attributes might, in turn, affect the motivational orientation of a student. The student who is sure of some level of success is more likely to be motivated to tackle the task than one who is unsure of the outcome (Adelman & Taylor, 1990). A student who is motivated to learn will find school-related tasks meaningful (Brophy, 1983, 1987). Teachers can help students to maximize their achievement by adjusting the instructional design to their individual characteristics and motivational orientation. The personality traits and motivational tendency of learners with mild handicaps can either help them to compensate for their inadequate learning abilities and enhance performanc", "title": "" }, { "docid": "262c11ab9f78e5b3f43a31ad22cf23c5", "text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to eliciting defensive responses through pairings with an aversive stimulus. If innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear. 
Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.", "title": "" }, { "docid": "82c9c8a7a9dccfa59b09df595de6235c", "text": "Honeypots are closely monitored decoys that are employed in a network to study the trail of hackers and to alert network administrators of a possible intrusion. Using honeypots provides a cost-effective solution to increase the security posture of an organization. Even though it is not a panacea for security breaches, it is useful as a tool for network forensics and intrusion detection. Nowadays, they are also being extensively used by the research community to study issues in network security, such as Internet worms, spam control, DoS attacks, etc. In this paper, we advocate the use of honeypots as an effective educational tool to study issues in network security. We support this claim by demonstrating a set of projects that we have carried out in a network, which we have deployed specifically for running distributed computer security projects. The design of our projects tackles the challenges in installing a honeypot in academic institution, by not intruding on the campus network while providing secure access to the Internet. In addition to a classification of honeypots, we present a framework for designing assignments/projects for network security courses. The three sample honeypot projects discussed in this paper are presented as examples of the framework.", "title": "" }, { "docid": "da4b2452893ca0734890dd83f5b63db4", "text": "Diabetic retinopathy is when damage occurs to the retina due to diabetes, which affects up to 80 percent of all patients who have had diabetes for 10 years or more. The expertise and equipment required are often lacking in areas where diabetic retinopathy detection is most needed. Most of the work in the field of diabetic retinopathy has been based on disease detection or manual extraction of features, but this paper aims at automatic diagnosis of the disease into its different stages using deep learning. This paper presents the design and implementation of GPU accelerated deep convolutional neural networks to automatically diagnose and thereby classify high-resolution retinal images into 5 stages of the disease based on severity. The single model accuracy of the convolutional neural networks presented in this paper is 0.386 on a quadratic weighted kappa metric and ensembling of three such similar models resulted in a score of 0.3996.", "title": "" }, { "docid": "6fdb3ae03e6443765c72197eb032f4a0", "text": "This dissertation describes a number of algorithms developed to increase the robustness of automatic speech recognition systems with respect to changes in the environment. These algorithms attempt to improve the recognition accuracy of speech recognition systems when they are trained and tested in different acoustical environments, and when a desk-top microphone (rather than a close-talking microphone) is used for speech input. Without such processing, mismatches between training and testing conditions produce an unacceptable degradation in recognition accuracy. Two kinds of environmental variability are introduced by the use of desk-top microphones and different training and testing conditions: additive noise and spectral tilt introduced by linear filtering. 
An important attribute of the novel compensation algorithms described in this thesis is that they provide joint rather than independent compensation for these two types of degradation. Acoustical compensation is applied in our algorithms as an additive correction in the cepstral domain. This allows a higher degree of integration within SPHINX, the Carnegie Mellon speech recognition system, that uses the cepstrum as its feature vector. Therefore, these algorithms can be implemented very efficiently. Processing in many of these algorithms is based on instantaneous signal-to-noise ratio (SNR), as the appropriate compensation represents a form of noise suppression at low SNRs and spectral equalization at high SNRs. The compensation vectors for additive noise and spectral transformations are estimated by minimizing the differences between speech feature vectors obtained from a \"standard\" training corpus of speech and feature vectors that represent the current acoustical environment. In our work this is accomplished by a minimizing the distortion of vector-quantized cepstra that are produced by the feature extraction module in SPHINX. In this dissertation we describe several algorithms including the SNR-Dependent Cepstral Normalization, (SDCN) and the Codeword-Dependent Cepstral Normalization (CDCN). With CDCN, the accuracy of SPHINX when trained on speech recorded with a close-talking microphone and tested on speech recorded with a desk-top microphone is essentially the same obtained when the system is trained and tested on speech from the desk-top microphone. An algorithm for frequency normalization has also been proposed in which the parameter of the bilinear transformation that is used by the signal-processing stage to produce frequency warping is adjusted for each new speaker and acoustical environment. The optimum value of this parameter is again chosen to minimize the vector-quantization distortion between the standard environment and the current one. In preliminary studies, use of this frequency normalization produced a moderate additional decrease in the observed error rate.", "title": "" }, { "docid": "cc5d183cae6251b73e5302b81e4589db", "text": "Digital images in the real world are created by a variety of means and have diverse properties. A photographical natural scene image (NSI) may exhibit substantially different characteristics from a computer graphic image (CGI) or a screen content image (SCI). This casts major challenges to objective image quality assessment, for which existing approaches lack effective mechanisms to capture such content type variations, and thus are difficult to generalize from one type to another. To tackle this problem, we first construct a cross-content-type (CCT) database, which contains 1,320 distorted NSIs, CGIs, and SCIs, compressed using the high efficiency video coding (HEVC) intra coding method and the screen content compression (SCC) extension of HEVC. We then carry out a subjective experiment on the database in a well-controlled laboratory environment. Moreover, we propose a unified content-type adaptive (UCA) blind image quality assessment model that is applicable across content types. A key step in UCA is to incorporate the variations of human perceptual characteristics in viewing different content types through a multi-scale weighting framework. This leads to superior performance on the constructed CCT database. UCA is training-free, implying strong generalizability. 
To verify this, we test UCA on other databases containing JPEG, MPEG-2, H.264, and HEVC compressed images/videos, and observe that it consistently achieves competitive performance.", "title": "" }, { "docid": "d9bd23208ab6eb8688afea408a4c9eba", "text": "A novel ultra-wideband (UWB) bandpass filter with 5 to 6 GHz rejection band is proposed. The multiple coupled line structure is incorporated with multiple-mode resonator (MMR) to provide wide transmission band and enhance out-of band performance. To inhibit the signals ranged from 5- to 6-GHz, four stepped-impedance open stubs are implemented on the MMR without increasing the size of the proposed filter. The design of the proposed UWB filter has two transmission bands. The first passband from 2.8 GHz to 5 GHz has less than 2 dB insertion loss and greater than 18 dB return loss. The second passband within 6 GHz and 10.6 GHz has less than 1.5 dB insertion loss and greater than 15 dB return loss. The rejection at 5.5 GHz is better than 50 dB. This filter can be integrated in UWB radio systems and efficiently enhance the interference immunity from WLAN.", "title": "" } ]
scidocsrr
80548003f403743e8b768531b1051350
Optimizing NoSQL DB on Flash: A Case Study of RocksDB
[ { "docid": "f10660b168700e38e24110a575b5aafa", "text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.", "title": "" } ]
[ { "docid": "b1394b4534d1a2d62767f885c180903b", "text": "OBJECTIVE\nTo determine the value of measuring fetal femur and humerus length at 11-14 weeks of gestation in screening for chromosomal defects.\n\n\nMETHODS\nFemur and humerus lengths were measured using transabdominal ultrasound in 1018 fetuses immediately before chorionic villus sampling for karyotyping at 11-14 weeks of gestation. In the group of chromosomally normal fetuses, regression analysis was used to determine the association between long bone length and crown-rump length (CRL). Femur and humerus lengths in fetuses with trisomy 21 were compared with those of normal fetuses.\n\n\nRESULTS\nThe median gestation was 12 (range, 11-14) weeks. The karyotype was normal in 920 fetuses and abnormal in 98, including 65 cases of trisomy 21. In the chromosomally normal group the fetal femur and humerus lengths increased significantly with CRL (femur length = - 6.330 + 0.215 x CRL in mm, r = 0.874, P < 0.0001; humerus length = - 6.240 + 0.220 x CRL in mm, r = 0.871, P < 0.0001). In the Bland-Altman plot the mean difference between paired measurements of femur length was 0.21 mm (95% limits of agreement - 0.52 to 0.48 mm) and of humerus length was 0.23 mm (95% limits of agreement - 0.57 to 0.55 mm). In the trisomy 21 fetuses the median femur and humerus lengths were significantly below the appropriate normal mean for CRL by 0.4 and 0.3 mm, respectively (P = 0.002), but they were below the respective 5th centile of the normal range in only six (9.2%) and three (4.6%) of the cases, respectively.\n\n\nCONCLUSION\nAt 11-14 weeks of gestation the femur and humerus lengths in trisomy 21 fetuses are significantly reduced but the degree of deviation from normal is too small for these measurements to be useful in screening for trisomy 21.", "title": "" }, { "docid": "89e0687a467c2e026e40b6bd5633e09a", "text": "Secure two-party computation enables two parties to evaluate a function cooperatively without revealing to either party anything beyond the function’s output. The garbled-circuit technique, a generic approach to secure two-party computation for semi-honest participants, was developed by Yao in the 1980s, but has been viewed as being of limited practical significance due to its inefficiency. We demonstrate several techniques for improving the running time and memory requirements of the garbled-circuit technique, resulting in an implementation of generic secure two-party computation that is significantly faster than any previously reported while also scaling to arbitrarily large circuits. We validate our approach by demonstrating secure computation of circuits with over 109 gates at a rate of roughly 10 μs per garbled gate, and showing order-of-magnitude improvements over the best previous privacy-preserving protocols for computing Hamming distance, Levenshtein distance, Smith-Waterman genome alignment, and AES.", "title": "" }, { "docid": "d84a4c4b678329ddb3a81cc1e55150ab", "text": "This paper describes a Robot-Audition based Car Human Machine Interface (RA-CHMI). A RA-CHMI, like a car navigation system, has difficulty dealing with voice commands, since there are many noise sources in a car, including road noise, air-conditioner, music, and passengers. Microphone array processing developed in robot audition, may overcome this problem. 
Robot audition techniques, including sound source localization, Voice Activity Detection (VAD), sound source separation, and barge-in-able processing, were introduced by considering the characteristics of RA-CHMI. Automatic Speech Recognition (ASR), based on a Deep Neural Network (DNN), improved recognition performance and robustness in a noisy environment. In addition, as an integrated framework, HARK-Dialog was developed to build a multi-party and multi-modal dialog system, enabling the seamless use of cloud and local services with pluggable modular architecture. The constructed multi-party and multimodal RA-CHMI system did not require a push-to-talk button, nor did it require reducing the audio volume or air-conditioner when issuing speech commands. It could also control a four-DOF robot agent to make the system's responses more understandable. The proposed RA-CHMI was validated by evaluating essential techniques in the system, such as VAD and DNN-ASR, using real speech data recorded during driving. The entire design of the RA-CHMI system, including the system response time and the proper use of cloud/local services, are also discussed.", "title": "" }, { "docid": "fbec9e1a860b41575bbe07e3ce27c8bf", "text": "Two different antennas constructed using a new concept, the slot meander patch (SMP) design, are presented in this study. SMP antennas are designed for fourth-generation long-term evolution (4G LTE) handheld devices. These antennas are used for different target specifications: LTE-Time Division Duplex and LTE-Frequency Division Duplex (LTE TDD and LTE FDD). The first antenna is designed to operate in a wideband of 1.68-3.88 GHz to cover eight LTE TDD application frequency bands. Investigations have shown that the antenna designed with unequal meander widths has a higher efficiency compared to its equivalent antenna design with equal meander widths. The second antenna was configured as a multiband SMP antenna, which operates at three distinct frequency bands (0.5-0.75, 1.1-2.7, and 3.3-3.9 GHz), to cover eight LTE FDD application bands including the lowest and the highest bands. There is a good agreement between the measurement and simulation results for both antennas. Moreover, parametric studies have been carried out to investigate the flexible multiband antenna. Results have shown that the bandwidths can be improved through adjusting the meander widths without changing the SMP length and all other parameters.", "title": "" }, { "docid": "7dcc565c03660fbc1da90164a5cba448", "text": "Do continuous word embeddings encode any useful information for constituency parsing? We isolate three ways in which word embeddings might augment a stateof-the-art statistical parser: by connecting out-of-vocabulary words to known ones, by encouraging common behavior among related in-vocabulary words, and by directly providing features for the lexicon. We test each of these hypotheses with a targeted change to a state-of-the-art baseline. Despite small gains on extremely small supervised training sets, we find that extra information from embeddings appears to make little or no difference to a parser with adequate training data. Our results support an overall hypothesis that word embeddings import syntactic information that is ultimately redundant with distinctions learned from treebanks in other ways.", "title": "" }, { "docid": "fabc65effd31f3bb394406abfa215b3e", "text": "Statistical learning theory was introduced in the late 1960's. 
Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).", "title": "" }, { "docid": "3ea5607d04419aae36592b6dcce25304", "text": "Optimization problems with rank constraints arise in many applications, including matrix regression, structured PCA, matrix completion and matrix decomposition problems. An attractive heuristic for solving such problems is to factorize the low-rank matrix, and to run projected gradient descent on the nonconvex factorized optimization problem. The goal of this problem is to provide a general theoretical framework for understanding when such methods work well, and to characterize the nature of the resulting fixed point. We provide a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution. Our results are applicable even when the initial solution is outside any region of local convexity, and even when the problem is globally concave. Working in a non-asymptotic framework, we show that our conditions are satisfied for a wide range of concrete models, including matrix regression, structured PCA, matrix completion with real and quantized observations, matrix decomposition, and graph clustering problems. Simulation results show excellent agreement with the theoretical predictions.", "title": "" }, { "docid": "e49dcbcb0bb8963d4f724513d66dd3a0", "text": "To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit to the other agents’ policies during training, failing to sufficiently generalize during execution. We introduce a new metric, joint-policy correlation, to quantify this effect. We describe an algorithm for general MARL, based on approximate best responses to mixtures of policies generated using deep reinforcement learning, and empirical game-theoretic analysis to compute meta-strategies for policy selection. The algorithm generalizes previous ones such as InRL, iterated best response, double oracle, and fictitious play. Then, we present a scalable implementation which reduces the memory requirement using decoupled meta-solvers. 
Finally, we demonstrate the generality of the resulting policies in two partially observable settings: gridworld coordination games and poker.", "title": "" }, { "docid": "c4ca4238a0b923820dcc509a6f75849b", "text": "1", "title": "" }, { "docid": "076ab7223de2d7eee7b3875bc2bb82e4", "text": "Firewalls are network devices which enforce an organization’s security policy. Since their development, various methods have been used to implement firewalls. These methods filter network traffic at one or more of the seven layers of the ISO network model, most commonly at the application, transport, and network, and data-link levels. In addition, researchers have developed some newer methods, such as protocol normalization and distributed firewalls, which have not yet been widely adopted. Firewalls involve more than the technology to implement them. Specifying a set of filtering rules, known as a policy, is typically complicated and error-prone. High-level languages have been developed to simplify the task of correctly defining a firewall’s policy. Once a policy has been specified, the firewall needs to be tested to determine if it actually implements the policy correctly. Little work exists in the area of firewall theory; however, this article summarizes what exists. Because some data must be able to pass in and out of a firewall, in order for the protected network to be useful, not all attacks can be stopped by firewalls. Some emerging technologies, such as Virtual Private Networks (VPN) and peer-to-peer networking pose new challenges for firewalls.", "title": "" }, { "docid": "ad9f3510ffaf7d0bdcf811a839401b83", "text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.", "title": "" }, { "docid": "e86ee868324e80910d57093c30c5c3f7", "text": "These notes are based on a series of lectures I gave at the Tokyo Institute of Technology from April to July 2005. They constituted a course entitled “An introduction to geometric group theory” totalling about 20 hours. The audience consisted of fourth year students, graduate students as well as several staff members. I therefore tried to present a logically coherent introduction to the subject, tailored to the background of the students, as well as including a number of diversions into more sophisticated applications of these ideas. There are many statements left as exercises. I believe that those essential to the logical developments will be fairly routine. Those related to examples or diversions may be more challenging. The notes assume a basic knowledge of group theory, and metric and topological spaces. 
We describe some of the fundamental notions of geometric group theory, such as quasi-isometries, and aim for a basic overview of hyperbolic groups. We describe group presentations from first principles. We give an outline description of fundamental groups and covering spaces, sufficient to allow us to illustrate various results with more explicit examples. We also give a crash course on hyperbolic geometry. Again the presentation is rather informal, and aimed at providing a source of examples of hyperbolic groups. This is not logically essential to most of what follows. In principle, the basic theory of hyperbolic groups can be developed with no reference to hyperbolic geometry, but interesting examples would be rather sparse. In order not to interupt the exposition, I have not given references in the main text. We give sources and background material as notes in the final section. I am very grateful for the generous support offered by the Tokyo Insititute of Technology, which allowed me to complete these notes, as well as giving me the freedom to pursue my own research interests. I am indebted to Sadayoshi Kojima for his invitation to spend six months there, and for many interesting conversations. I thank Toshiko Higashi for her constant help in making my stay a very comfortable and enjoyable one. My then PhD student Ken Shackleton accompanied me on my visit, and provided some tutorial assistance. Shigeru Mizushima and Hiroshi Ooyama helped with some matters of translatation etc.", "title": "" }, { "docid": "420659637302d82c616bf719968f2f81", "text": "PURPOSE\nTo update previously summarized estimates of diagnostic accuracy for acute cholecystitis and to obtain summary estimates for more recently introduced modalities.\n\n\nMATERIALS AND METHODS\nA systematic search was performed in MEDLINE, EMBASE, Cochrane Library, and CINAHL databases up to March 2011 to identify studies about evaluation of imaging modalities in patients who were suspected of having acute cholecystitis. Inclusion criteria were explicit criteria for a positive test result, surgery and/or follow-up as the reference standard, and sufficient data to construct a 2 × 2 table. Studies about evaluation of predominantly acalculous cholecystitis in intensive care unit patients were excluded. Bivariate random-effects modeling was used to obtain summary estimates of sensitivity and specificity.\n\n\nRESULTS\nFifty-seven studies were included, with evaluation of 5859 patients. Sensitivity of cholescintigraphy (96%; 95% confidence interval [CI]: 94%, 97%) was significantly higher than sensitivity of ultrasonography (US) (81%; 95% CI: 75%, 87%) and magnetic resonance (MR) imaging (85%; 95% CI: 66%, 95%). There were no significant differences in specificity among cholescintigraphy (90%; 95% CI: 86%, 93%), US (83%; 95% CI: 74%, 89%) and MR imaging (81%; 95% CI: 69%, 90%). Only one study about evaluation of computed tomography (CT) met the inclusion criteria; the reported sensitivity was 94% (95% CI: 73%, 99%) at a specificity of 59% (95% CI: 42%, 74%).\n\n\nCONCLUSION\nCholescintigraphy has the highest diagnostic accuracy of all imaging modalities in detection of acute cholecystitis. 
The diagnostic accuracy of US has a substantial margin of error, comparable to that of MR imaging, while CT is still underevaluated.", "title": "" }, { "docid": "7a8fb7b1383b7f7562dd319a6f43fcab", "text": "An important problem that online work marketplaces face is grouping clients into clusters, so that in each cluster clients are similar with respect to their hiring criteria. Such a separation allows the marketplace to \"learn\" more accurately the hiring criteria in each cluster and recommend the right contractor to each client, for a successful collaboration. We propose a Maximum Likelihood definition of the \"optimal\" client clustering along with an efficient Expectation-Maximization clustering algorithm that can be applied in large marketplaces. Our results on the job hirings at oDesk over a seven-month period show that our client-clustering approach yields significant gains compared to \"learning\" the same hiring criteria for all clients. In addition, we analyze the clustering results to find interesting differences between the hiring criteria in the different groups of clients.", "title": "" }, { "docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf", "text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.", "title": "" }, { "docid": "7ac1412d56f00fd2defb4220938d9346", "text": "Coingestion of protein with carbohydrate (CHO) during recovery from exercise can affect muscle glycogen synthesis, particularly if CHO intake is suboptimal. Another potential benefit of protein feeding is an increased synthesis rate of muscle proteins, as is well documented after resistance exercise. In contrast, the effect of nutrient manipulation on muscle protein kinetics after aerobic exercise remains largely unexplored. We tested the hypothesis that ingesting protein with CHO after a standardized 2-h bout of cycle exercise would increase mixed muscle fractional synthetic rate (FSR) and whole body net protein balance (WBNB) vs. trials matched for total CHO or total energy intake. We also examined whether postexercise glycogen synthesis could be enhanced by adding protein or additional CHO to a feeding protocol that provided 1.2 g CHO x kg(-1) x h(-1), which is the rate generally recommended to maximize this process. Six active men ingested drinks during the first 3 h of recovery that provided either 1.2 g CHO.kg(-1).h(-1) (L-CHO), 1.2 g CHO + 0.4 g protein x kg(-1) x h(-1) (PRO-CHO), or 1.6 g CHO x kg(-1) x h(-1) (H-CHO) in random order. Based on a primed constant infusion of l-[ring-(2)H(5)]phenylalanine, analysis of biopsies (vastus lateralis) obtained at 0 and 4 h of recovery showed that muscle FSR was higher (P < 0.05) in PRO-CHO (0.09 +/- 0.01%/h) vs. both L-CHO (0.07 +/- 0.01%/h) and H-CHO (0.06 +/- 0.01%/h). WBNB assessed using [1-(13)C]leucine was positive only during PRO-CHO, and this was mainly attributable to a reduced rate of protein breakdown. 
Glycogen synthesis rate was not different between trials. We conclude that ingesting protein with CHO during recovery from aerobic exercise increased muscle FSR and improved WBNB, compared with feeding strategies that provided CHO only and were matched for total CHO or total energy intake. However, adding protein or additional CHO to a feeding strategy that provided 1.2 g CHO x kg(-1) x h(-1) did not further enhance glycogen resynthesis during recovery.", "title": "" }, { "docid": "70331b25d31da354c14612df08fda33b", "text": "Today, Sales forecasting plays a key role for each business in this competitive environment. The forecasting of sales data in automobile industry has become a primary concern to predict the accuracy in future sales. This work addresses the problem of monthly sales forecasting in automobile industry (maruti car). The data set is based on monthly sales (past 5 year data from 2008 to 2012). Primarily, we used two forecasting methods namely Moving Average and Exponential smoothing to forecast the past data set and then we use these forecasted values as a input for ANFIS (Adaptive Neuro Fuzzy Inference System). Here, MA and ES forecasted values used as input variable for ANFIS to obtain the final accurate sales forecast. Finally we compare our model with two other forecasting models: ANN (Artificial Neural Network) and Linear Regression. Empirical results demonstrate that the ANFIS model gives better results out than other two models.", "title": "" }, { "docid": "d9edc458cee2261b78214132c2e4b811", "text": "Since its discovery, the asymmetric Fano resonance has been a characteristic feature of interacting quantum systems. The shape of this resonance is distinctively different from that of conventional symmetric resonance curves. Recently, the Fano resonance has been found in plasmonic nanoparticles, photonic crystals, and electromagnetic metamaterials. The steep dispersion of the Fano resonance profile promises applications in sensors, lasing, switching, and nonlinear and slow-light devices.", "title": "" }, { "docid": "0dfba09dc9a01e4ebca16eb5688c81aa", "text": "Machine-to-Machine (M2M) refers to technologies with various applications. In order to provide the vision and goals of M2M, an M2M ecosystem with a service platform must be established by the key players in industrial domains so as to substantially reduce development costs and improve time to market of M2M devices and services. The service platform must be supported by M2M enabling technologies and standardization. In this paper, we present a survey of existing M2M service platforms and explore the various research issues and challenges involved in enabling an M2M service platform. We first classify M2M nodes according to their characteristics and required functions, and we then highlight the features of M2M traffic. With these in mind, we discuss the necessity of M2M platforms. By comparing and analyzing the existing approaches and solutions of M2M platforms, we identify the requirements and functionalities of the ideal M2M service platform. Based on these, we propose an M2M service platform (M2SP) architecture and its functionalities, and present the M2M ecosystem with this platform. Different application scenarios are given to illustrate the interaction between the components of the proposed platform. 
In addition, we discuss the issues and challenges of enabling technologies and standardization activities, and outline future research directions for the M2M network.", "title": "" }, { "docid": "2372c664173be9aa8c2497b42703a80e", "text": "Medical devices have a great impact but rigorous production and quality norms to meet, which pushes manufacturing technology to its limits in several fields, such as electronics, optics, communications, among others. This paper briefly explores how the medical industry is absorbing many of the technological developments from other industries, and making an effort to translate them into the healthcare requirements. An example is discussed in depth: implantable neural microsystems used for brain circuits mapping and modulation. Conventionally, light sources and electrical recording points are placed on silicon neural probes for optogenetic applications. The active sites of the probe must provide enough light power to modulate connectivity between neural networks, and simultaneously ensure reliable recordings of action potentials and local field activity. These devices aim at being a flexible and scalable technology capable of acquiring knowledge about neural mechanisms. Moreover, this paper presents a fabrication method for 2-D LED-based microsystems with high aspect-ratio shafts, capable of reaching up to 20 mm deep neural structures. In addition, PDMS $\\mu $ lenses on LEDs top surface are presented for focusing and increasing light intensity on target structures.", "title": "" } ]
scidocsrr
4640211701dd9e1c4bd980c17d726d1f
Design of patch array antennas for future 5G applications
[ { "docid": "e541be7c81576fdef564fd7eba5d67dd", "text": "As the cost of massively broadband® semiconductors continue to be driven down at millimeter wave (mm-wave) frequencies, there is great potential to use LMDS spectrum (in the 28-38 GHz bands) and the 60 GHz band for cellular/mobile and peer-to-peer wireless networks. This work presents urban cellular and peer-to-peer RF wideband channel measurements using a broadband sliding correlator channel sounder and steerable antennas at carrier frequencies of 38 GHz and 60 GHz, and presents measurements showing the propagation time delay spread and path loss as a function of separation distance and antenna pointing angles for many types of real-world environments. The data presented here show that at 38 GHz, unobstructed Line of Site (LOS) channels obey free space propagation path loss while non-LOS (NLOS) channels have large multipath delay spreads and can exploit many different pointing angles to provide propagation links. At 60 GHz, there is notably more path loss, smaller delay spreads, and fewer unique antenna angles for creating a link. For both 38 GHz and 60 GHz, we demonstrate empirical relationships between the RMS delay spread and antenna pointing angles, and observe that excess path loss (above free space) has an inverse relationship with transmitter-to-receiver separation distance.", "title": "" }, { "docid": "136fadcc21143fd356b48789de5fb2b0", "text": "Cost-effective and scalable wireless backhaul solutions are essential for realizing the 5G vision of providing gigabits per second anywhere. Not only is wireless backhaul essential to support network densification based on small cell deployments, but also for supporting very low latency inter-BS communication to deal with intercell interference. Multiplexing backhaul and access on the same frequency band (in-band wireless backhaul) has obvious cost benefits from the hardware and frequency reuse perspective, but poses significant technology challenges. We consider an in-band solution to meet the backhaul and inter-BS coordination challenges that accompany network densification. Here, we present an analysis to persuade the readers of the feasibility of in-band wireless backhaul, discuss realistic deployment and system assumptions, and present a scheduling scheme for inter- BS communications that can be used as a baseline for further improvement. We show that an inband wireless backhaul for data backhauling and inter-BS coordination is feasible without significantly hurting the cell access capacities.", "title": "" }, { "docid": "c8a27aecd6f356bfdaeb7c33558843df", "text": "Wireless communications today enables us to connect devices and people for an unprecedented exchange of multimedia and data content. The data rates of wireless communications continue to increase, mainly driven by innovation in electronics. Once the latency of communication systems becomes low enough to enable a round-trip delay from terminals through the network back to terminals of approximately 1 ms, an overlooked breakthrough?human tactile to visual feedback control?will change how humans communicate around the world. Using these controls, wireless communications can be the platform for enabling the control and direction of real and virtual objects in many situations of our life. Almost no area of the economy will be left untouched, as this new technology will change health care, mobility, education, manufacturing, smart grids, and much more. 
The Tactile Internet will become a driver for economic growth and innovation and will help bring a new level of sophistication to societies.", "title": "" } ]
[ { "docid": "5f17fc08df06a614c981a979ce9c36e1", "text": "Performing smart computations in a context of cloud computing and big data is highly appreciated today. It allows customers to fully benefit from cloud computing capacities (such as processing or storage) without losing confidentiality of sensitive data. Fully homomorphic encryption (FHE) is a smart category of encryption schemes that enables working with the data in its encrypted form. It permits us to preserve confidentiality of our sensible data and to benefit from cloud computing capabilities. While FHE is combined with verifiable computation, it offers efficient procedures for outsourcing computations over encrypted data to a remote, but non-trusted, cloud server. The resulting scheme is called Verifiable Fully Homomorphic Encryption (VFHE). Currently, it has been demonstrated by many existing schemes that the theory is feasible but the efficiency needs to be dramatically improved in order to make it usable for real applications. One subtle difficulty is how to efficiently handle the noise. This paper aims to introduce an efficient and symmetric verifiable FHE based on a new mathematic structure that is noise free. In our encryption scheme, the noise is constant and does not depend on homomorphic evaluation of ciphertexts. The homomorphy of our scheme is obtained from simple matrix operations (addition and multiplication). The running time of the multiplication operation of our encryption scheme in a cloud environment has an order of a few milliseconds.", "title": "" }, { "docid": "1421fb35904ce187fb7f98faab8f5fcc", "text": "Although the lung is the most common site of extrahepatic metastases from hepatocellular carcinoma (HCC), the optimal treatment for such metastases has’nt been established. External beam radiotherapy (EBRT) is becoming a useful local control therapy for lung cancer. To evaluated the efficacy of EBRT treatment for such metastases, we retrospectively studied 13 patients (11 men and 2 women; mean age, 52.6 years) with symptomatic pulmonary metastases from HCC who had been treated with EBRT in our institution. The palliative radiation dose delivered to the lung lesions ranged from 47 to 60 Gy (median 50) in conventional fractions, while the intrahepatic lesions were treated with surgery or transarterial chemoembolization, and/or EBRT. Follow-up period from radiotherapy ranged from 3.7 to 49.1 months (median, 16.7). Among the 13 patients, 23 out of a total of 31 pulmonary metastatic lesions received EBRT. In 12/13(92.3%) patients, significant symptoms were completely or partially relieved. An objective response was observed in 10/13(76.9%) of the subjects by computed tomography imaging. The median progression-free survival for all patients was 13.4 months. The 2-year survival rate from pulmonary metastasis was 70.7%. Adverse effects were mild and consisted of bone marrow suppression in three patients and pleural effusion in one patient (all CTCAE Grade II). In conclusion, EBRT with ≤60 Gy appears to be a good palliative therapy with reasonable safety for patients with pulmonary metastases from HCC. However, large-scale randomized clinical trials will be necessary to confirm the therapeutic role of this method.", "title": "" }, { "docid": "5238ae08b15854af54274e1c2b118d54", "text": "One-dimensional fractional anomalous sub-diffusion equations on an unbounded domain are considered in our work. 
Beginning with the derivation of the exact artificial boundary conditions, the original problem on an unbounded domain is converted into mainly solving an initial-boundary value problem on a finite computational domain. The main contribution of our work, as compared with the previous work, lies in the reduction of fractional differential equations on an unbounded domain by using artificial boundary conditions and construction of the corresponding finite difference scheme with the help of method of order reduction. The difficulty is the treatment of Neumann condition on the artificial boundary, which involves the time-fractional derivative operator. The stability and convergence of the scheme are proven using the discrete energy method. Two numerical examples clarify the effectiveness and accuracy of the proposed method. 2011 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "682ac189fe3fdcb602e1a361f957220a", "text": "Event-based distributed systems are programmed to operate in response to events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems. While numerous technologies have been developed for supporting event-based interactions over local-area networks, these technologies do not scale well to wide-area networks such as the Internet. Wide-area networks pose new challenges that have to be attacked with solutions that specifically address issues of scalability. This paper presents Siena, a scalable event notification service that is based on a distributed architecture of event servers. We first present a formally defined interface that is based on an extension to the publish/subscribe protocol. We then describe and compare several different server topologies and routing algorithms. We conclude by briefly discussing related work, our experience with an initial implementation of Siena, and a framework for evaluating the scalability of event notification services such as Siena.", "title": "" }, { "docid": "2657e5090896cc7dc01f3b66d2d97a94", "text": "In this article, we review gas sensor application of one-dimensional (1D) metal-oxide nanostructures with major emphases on the types of device structure and issues for realizing practical sensors. One of the most important steps in fabricating 1D-nanostructure devices is manipulation and making electrical contacts of the nanostructures. Gas sensors based on individual 1D nanostructure, which were usually fabricated using electron-beam lithography, have been a platform technology for fundamental research. Recently, gas sensors with practical applicability were proposed, which were fabricated with an array of 1D nanostructures using scalable micro-fabrication tools. In the second part of the paper, some critical issues are pointed out including long-term stability, gas selectivity, and room-temperature operation of 1D-nanostructure-based metal-oxide gas sensors.", "title": "" }, { "docid": "e2009f56982f709671dcfe43048a8919", "text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. 
This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.", "title": "" }, { "docid": "98a65cca7217dfa720dd4ed2972c3bdd", "text": "Intramuscular fat percentage (IMF%) has been shown to have a positive influence on the eating quality of red meat. Selection of Australian lambs for increased lean tissue and reduced carcass fatness using Australian Sheep Breeding Values has been shown to decrease IMF% of the Muscularis longissimus lumborum. The impact this selection has on the IMF% of other muscle depots is unknown. This study examined IMF% in five different muscles from 400 lambs (M. longissimus lumborum, Muscularis semimembranosus, Muscularis semitendinosus, Muscularis supraspinatus, Muscularis infraspinatus). The sires of these lambs had a broad range in carcass breeding values for post-weaning weight, eye muscle depth and fat depth over the 12th rib (c-site fat depth). Results showed IMF% to be highest in the M. supraspinatus (4.87 ± 0.1, P<0.01) and lowest in the M. semimembranosus (3.58 ± 0.1, P<0.01). Hot carcass weight was positively associated with IMF% of all muscles. Selection for decreasing c-site fat depth reduced IMF% in the M. longissimus lumborum, M. semimembranosus and M. semitendinosus. Higher breeding values for post-weaning weight and eye muscle depth increased and decreased IMF%, respectively, but only in the lambs born as multiples and raised as singles. For each per cent increase in lean meat yield percentage (LMY%), there was a reduction in IMF% of 0.16 in all five muscles examined. Given the drive within the lamb industry to improve LMY%, our results indicate the importance of continued monitoring of IMF% throughout the different carcass regions, given its importance for eating quality.", "title": "" }, { "docid": "375ff8dcd4e29eef317ee0838820c944", "text": "Due to continuous concerns about environmental pollution and a possible energy shortage, renewable energy systems, based mainly on wind power, solar energy, small hydro-electric power, etc have been implemented. Wind energy seems certain to play a major part in the world's energy future. In spite of sudden wind speed variations, wind farm generators should always be capable of extracting the maximum possible mechanical power from the wind and turning it into electrical power. Nowadays, most of the installed wind turbines are based on doubly-fed induction generators (DFIGs), wound rotor synchronous generators (WRSG) and permanent magnet synchronous generators (PMSGs). The DFIG equipped wind turbine has several advantages over others. One of which, the power converter in such wind turbines only deals with rotor power, hence the converter rating can run at reduced power rating. 
However DFIG has the famous disadvantage of the presence of slip rings which leads to increased maintenance costs and reduced life-time. Hence, brushless doubly fed induction machines (BDFIMs) can be considered as a viable alternative. In this paper, the brushless doubly fed twin stator induction generator (BDFTSIG) is modeled in details. A wind energy conversion system (WECS) utilizing a proposed indirect vector controlled BDFTSIG is presented. The proposed controller performance is investigated under various loading conditions showing enhanced transient and minimal steady state oscillations in addition to complete active/reactive power decoupling.", "title": "" }, { "docid": "e3461568f90b10dcbe05f1228b4a8614", "text": "A 2.4 GHz band high-efficiency RF rectifier and high sensitive dc voltage sensing circuit is implemented. A passive RF to DC rectifier of multiplier voltage type has no current consumption. This rectifier is using native threshold voltage diode-connected NMOS transistors to avoid the power loss due to the threshold voltage. It consumes only 900nA with 1.5V supply voltage adopting ultra low power DC sensing circuit using subthreshold current reference. These block incorporates a digital demodulation logic blocks. It can recognize OOK digital information and existence of RF input signal above sensitivity level or not. A low power RF rectifier and DC sensing circuit was fabricated in 0.18um CMOS technology with native threshold voltage NMOS; This RF wake up receiver has -28dBm sensitivity at 2.4 GHz band.", "title": "" }, { "docid": "3a7657130cb165682cc2e688a7e7195b", "text": "The functional simulator Simics provides a co-simulation integration path with a SystemC simulation environment to create Virtual Platforms. With increasing complexity of the SystemC models, this platform suffers from performance degradation due to the single threaded nature of the integrated Virtual Platform. In this paper, we present a multi-threaded Simics SystemC platform solution that significantly improves performance over the existing single threaded solution. The two schedulers run independently, only communicating in a thread safe manner through a message interface. Simics based logging and checkpointing are preserved within SystemC and tied to the corresponding Simics' APIs for a seamless experience. The solution also scales to multiple SystemC models within the platform, each running its own thread with an instantiation of the SystemC kernel. A second multi-cell solution is proposed providing comparable performance with the multi-thread solution, but reducing the burden of integration on the SystemC model. Empirical data is presented showing performance gains over the legacy single threaded solution.", "title": "" }, { "docid": "fb048df280c08a4d80eb18bafb36e6c7", "text": "There are very few reported cases of traumatic amputation of the male genitalia due to animal bite. The management involves thorough washout of the wounds, debridement, antibiotic prophylaxis, tetanus and rabies immunization followed by immediate reconstruction or primary wound closure with delayed reconstruction, when immediate reconstruction is not feasible. When immediate reconstruction is not feasible, long-term good functional and cosmetic results are still possible in the majority of cases by performing total phallic reconstruction. 
In particular, it is now possible to fashion a cosmetically acceptable sensate phallus with incorporated neourethra, to allow the patient to void while standing and to ejaculate, and with enough bulk to allow the insertion of a penile prosthesis to guarantee the rigidity necessary to engage in penetrative sexual intercourse.", "title": "" }, { "docid": "c47fde74be75b5e909d7657bb64bf23d", "text": "As the primary stakeholder for the Enterprise Architecture, the Chief Information Officer (CIO) is responsible for the evolution of the enterprise IT system. An important part of the CIO role is therefore to make decisions about strategic and complex IT matters. This paper presents a cost effective and scenariobased approach for providing the CIO with an accurate basis for decision making. Scenarios are analyzed and compared against each other by using a number of problem-specific easily measured system properties identified in literature. In order to test the usefulness of the approach, a case study has been carried out. A CIO needed guidance on how to assign functionality and data within four overlapping systems. The results are quantifiable and can be presented graphically, thus providing a cost-efficient and easily understood basis for decision making. The study shows that the scenario-based approach can make complex Enterprise Architecture decisions understandable for CIOs and other business-orientated stakeholders", "title": "" }, { "docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94", "text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.", "title": "" }, { "docid": "55285f99e1783bcba47ab41e56171026", "text": "Two different formal definitions of gray-scale reconstruction are presented. The use of gray-scale reconstruction in various image processing applications discussed to illustrate the usefulness of this transformation for image filtering and segmentation tasks. The standard parallel and sequential approaches to reconstruction are reviewed. It is shown that their common drawback is their inefficiency on conventional computers. 
To improve this situation, an algorithm that is based on the notion of regional maxima and makes use of breadth-first image scannings implemented using a queue of pixels is introduced. Its combination with the sequential technique results in a hybrid gray-scale reconstruction algorithm which is an order of magnitude faster than any previously known algorithm.", "title": "" }, { "docid": "06a1d90991c5a9039c6758a66205e446", "text": "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.", "title": "" }, { "docid": "b876e62db8a45ab17d3a9d217e223eb7", "text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.", "title": "" }, { "docid": "412e10ae26c0abcb37379c6b37ea022a", "text": "This paper presents the Gavagai Living Lexicon, which is an online distributional semantic model currently available in 20 different languages. We describe the underlying distributional semantic model, and how we have solved some of the challenges in applying such a model to large amounts of streaming data. We also describe the architecture of our implementation, and discuss how we deal with continuous quality assurance of the lexicon.", "title": "" }, { "docid": "5c716fbdc209d5d9f703af1e88f0d088", "text": "Protecting visual secrets is an important problem due to the prevalence of cameras that continuously monitor our surroundings. Any viable solution to this problem should also minimize the impact on the utility of applications that use images. In this work, we build on the existing work of adversarial learning to design a perturbation mechanism that jointly optimizes privacy and utility objectives. We provide a feasibility study of the proposed mechanism and present ideas on developing a privacy framework based on the adversarial perturbation mechanism.", "title": "" }, { "docid": "080c1666b7324bef25347496db11fb28", "text": "As the technical skills and costs associated with the deployment of phishing attacks decrease, we are witnessing an unprecedented level of scams that push the need for better methods to proactively detect phishing threats. 
In this work, we explored the use of URLs as input for machine learning models applied for phishing site prediction. In this way, we compared a feature-engineering approach followed by a random forest classifier against a novel method based on recurrent neural networks. We determined that the recurrent neural network approach provides an accuracy rate of 98.7% even without the need of manual feature creation, beating by 5% the random forest method. This means it is a scalable and fast-acting proactive detection system that does not require full content analysis.", "title": "" }, { "docid": "f281b48aba953acc8778aecf35ab310d", "text": "This paper presents a new deep learning architecture for Natural Language Inference (NLI). Firstly, we introduce a new architecture where alignment pairs are compared, compressed and then propagated to upper layers for enhanced representation learning. Secondly, we adopt factorization layers for efficient and expressive compression of alignment vectors into scalar features, which are then used to augment the base word representations. The design of our approach is aimed to be conceptually simple, compact and yet powerful. We conduct experiments on three popular benchmarks, SNLI, MultiNLI and SciTail, achieving competitive performance on all. A lightweight parameterization of our model also enjoys a≈ 3 times reduction in parameter size compared to the existing state-of-the-art models, e.g., ESIM and DIIN, while maintaining competitive performance. Additionally, visual analysis shows that our propagated features are highly interpretable.", "title": "" } ]
scidocsrr
cf0fe5c9c997d68774acdd4659d308ac
Accurate and Novel Recommendations: An Algorithm Based on Popularity Forecasting
[ { "docid": "45f8c4e3409f8b27221e45e6c3485641", "text": "In recent years, time information is more and more important in collaborative filtering (CF) based recommender system because many systems have collected rating data for a long time, and time effects in user preference is stronger. In this paper, we focus on modeling time effects in CF and analyze how temporal features influence CF. There are four main types of time effects in CF: (1) time bias, the interest of whole society changes with time; (2) user bias shifting, a user may change his/her rating habit over time; (3) item bias shifting, the popularity of items changes with time; (4) user preference shifting, a user may change his/her attitude to some types of items. In this work, these four time effects are used by factorized model, which is called TimeSVD. Moreover, many other time effects are used by simple methods. Our time-dependent models are tested on Netflix data from Nov. 1999 to Dec. 2005. Experimental results show that prediction accuracy in CF can be improved significantly by using time information.", "title": "" }, { "docid": "af7584c0067de64024d364e321af133b", "text": "Recommendation systems have wide-spread applications in both academia and industry. Traditionally, performance of recommendation systems has been measured by their precision. By introducing novelty and diversity as key qualities in recommender systems, recently increasing attention has been focused on this topic. Precision and novelty of recommendation are not in the same direction, and practical systems should make a trade-off between these two quantities. Thus, it is an important feature of a recommender system to make it possible to adjust diversity and accuracy of the recommendations by tuning the model. In this paper, we introduce a probabilistic structure to resolve the diversity–accuracy dilemma in recommender systems. We propose a hybrid model with adjustable level of diversity and precision such that one can perform this by tuning a single parameter. The proposed recommendation model consists of two models: one for maximization of the accuracy and the other one for specification of the recommendation list to tastes of users. Our experiments on two real datasets show the functionality of the model in resolving accuracy–diversity dilemma and outperformance of the model over other classic models. The proposed method could be extensively applied to real commercial systems due to its low computational complexity and significant performance.", "title": "" } ]
[ { "docid": "6e07a006d4e34f35330c74116762a611", "text": "Human replicas may elicit unintended cold, eerie feelings in viewers, an effect known as the uncanny valley. Masahiro Mori, who proposed the effect in 1970, attributed it to inconsistencies in the replica's realism with some of its features perceived as human and others as nonhuman. This study aims to determine whether reducing realism consistency in visual features increases the uncanny valley effect. In three rounds of experiments, 548 participants categorized and rated humans, animals, and objects that varied from computer animated to real. Two sets of features were manipulated to reduce realism consistency. (For humans, the sets were eyes-eyelashes-mouth and skin-nose-eyebrows.) Reducing realism consistency caused humans and animals, but not objects, to appear eerier and colder. However, the predictions of a competing theory, proposed by Ernst Jentsch in 1906, were not supported: The most ambiguous representations-those eliciting the greatest category uncertainty-were neither the eeriest nor the coldest.", "title": "" }, { "docid": "541075ddb29dd0acdf1f0cf3784c220a", "text": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important component of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier bears a good decision boundary. Therefore, transferring information closely related to the decision boundary can be a good attempt for knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting a decision boundary. Based on this idea, to transfer more accurate information about the decision boundary, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves the stateof-the-arts performance. 1", "title": "" }, { "docid": "a4f2a82daf86314363ceeac34cba7ed9", "text": "As a vital task in natural language processing, relation classification aims to identify relation types between entities from texts. In this paper, we propose a novel Att-RCNN model to extract text features and classify relations by combining recurrent neural network (RNN) and convolutional neural network (CNN). This network structure utilizes RNN to extract higher level contextual representations of words and CNN to obtain sentence features for the relation classification task. In addition to this network structure, both word-level and sentence-level attention mechanisms are employed in Att-RCNN to strengthen critical words and features to promote the model performance. Moreover, we conduct experiments on four distinct datasets: SemEval-2010 task 8, SemEval-2018 task 7 (two subtask datasets), and KBP37 dataset. 
Compared with the previous public models, Att-RCNN has the overall best performance and achieves the highest $F_{1}$ score, especially on the KBP37 dataset.", "title": "" }, { "docid": "ed0b19511e0c8fa14a9a089a72bb5145", "text": "We leverage crowd wisdom for multiple-choice question answering, and employ lightweight machine learning techniques to improve the aggregation accuracy of crowdsourced answers to these questions. In order to develop more effective aggregation methods and evaluate them empirically, we developed and deployed a crowdsourced system for playing the “Who wants to be a millionaire?” quiz show. Analyzing our data (which consist of more than 200,000 answers), we find that by just going with the most selected answer in the aggregation, we can answer over 90% of the questions correctly, but the success rate of this technique plunges to 60% for the later/harder questions in the quiz show. To improve the success rates of these later/harder questions, we investigate novel weighted aggregation schemes for aggregating the answers obtained from the crowd. By using weights optimized for reliability of participants (derived from the participants’ confidence), we show that we can pull up the accuracy rate for the harder questions by 15%, and to overall 95% average accuracy. Our results provide a good case for the benefits of applying machine learning techniques for building more accurate crowdsourced question answering systems.", "title": "" }, { "docid": "9b8a9c94e626e3932dd4a19cb6a5cf4c", "text": "Most existing computer and network systems authenticate a user only at the initial login session. This could be a critical security weakness, especially for high-security systems because it enables an impostor to access the system resources until the initial user logs out. This situation is encountered when the logged in user takes a short break without logging out or an impostor coerces the valid user to allow access to the system. To address this security flaw, we propose a continuous authentication scheme that continuously monitors and authenticates the logged in user. Previous methods for continuous authentication primarily used hard biometric traits, specifically fingerprint and face to continuously authenticate the initial logged in user. However, the use of these biometric traits is not only inconvenient to the user, but is also not always feasible due to the user's posture in front of the sensor. To mitigate this problem, we propose a new framework for continuous user authentication that primarily uses soft biometric traits (e.g., color of user's clothing and facial skin). The proposed framework automatically registers (enrolls) soft biometric traits every time the user logs in and fuses soft biometric matching with the conventional authentication schemes, namely password and face biometric. The proposed scheme has high tolerance to the user's posture in front of the computer system. Experimental results show the effectiveness of the proposed method for continuous user authentication.", "title": "" }, { "docid": "a9f8c6d1d10bedc23b100751c607f7db", "text": "Successful efforts in hand gesture recognition research within the last two decades paved the path for natural human–computer interaction systems. Unresolved challenges such as reliable identification of gesturing phase, sensitivity to size, shape, and speed variations, and issues due to occlusion keep hand gesture recognition research still very active. 
We provide a review of vision-based hand gesture recognition algorithms reported in the last 16 years. The methods using RGB and RGB-D cameras are reviewed with quantitative and qualitative comparisons of algorithms. Quantitative comparison of algorithms is done using a set of 13 measures chosen from different attributes of the algorithm and the experimental methodology adopted in algorithm evaluation. We point out the need for considering these measures together with the recognition accuracy of the algorithm to predict its success in real-world applications. The paper also reviews 26 publicly available hand gesture databases and provides the web-links for their download. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "7d7412baa5f23d4e710e6be26eee2b20", "text": "Result diversification has recently attracted much attention as a means of increasing user satisfaction in recommender systems and web search. Many different approaches have been proposed in the related literature for the diversification problem. In this paper, we survey, classify and comparatively study the various definitions, algorithms and metrics for result diversification.", "title": "" }, { "docid": "e6c0aa517c857ed217fc96aad58d7158", "text": "Conjoined twins, popularly known as Siamese twins, result from aberrant embryogenesis [1]. It is a rare presentation with an incidence of 1 in 50,000 births. Since 60% of these cases are still births, so the true incidence is estimated to be approximately 1 in 200,000 births [2-4]. This disorder is more common in females with female to male ratio of 3:1 [5]. Conjoined twins are classified based on their site of attachment with a suffix ‘pagus’ which is a Greek term meaning “fixed”. The main types of conjoined twins are omphalopagus (abdomen), thoracopagus (thorax), cephalopagus (ventrally head to umbilicus), ischipagus (pelvis), parapagus (laterally body side), craniopagus (head), pygopagus (sacrum) and rachipagus (vertebral column) [6]. Cephalophagus is an extremely rare variant of conjoined twins with an incidence of 11% among all cases. These types of twins are fused at head, thorax and upper abdominal cavity. They are pre-dominantly of two types: Janiceps (two faces are on the either side of the head) or non Janiceps type (normal single head and face). We hereby report a case of non janiceps cephalopagus conjoined twin, which was diagnosed after delivery.", "title": "" }, { "docid": "9bc681a751d8fe9e2c93204ea06786b8", "text": "In this paper, a complimentary split ring resonator (CSRR) enhanced wideband log-periodic antenna with coupled microstrip line feeding is presented. Here in this work, coupled line feeding to the patches is proposed to avoid individual microstrip feed matching complexities. Three CSRR elements were etched in the ground plane. Individual patches were designed according to the conventional log-periodic design rules. FR4 dielectric substrate is used to design a five-element log-periodic patch with CSRR printed on the ground plane. The result shows a wide operating band ranging from 4.5 GHz to 9 GHz. Surface current distribution of the antenna shows a strong resonance of CSRR's placed in the ground plane. The design approach of the antenna is reported and performance of the proposed antenna has been evaluated through three dimensional electromagnetic simulation validating performance enhancement of the antenna due to presence of CSRRs. 
Antennas designed in this work may be used in satellite and indoor wireless communication.", "title": "" }, { "docid": "b4ed57258b85ab4d81d5071fc7ad2cc9", "text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.", "title": "" }, { "docid": "b69aae02d366b75914862f5bc726c514", "text": "Nitrification in commercial aquaculture systems has been accomplished using many different technologies (e.g. trickling filters, fluidized beds and rotating biological contactors) but commercial aquaculture systems have been slow to adopt denitrification. Denitrification (conversion of nitrate, NO3 − to nitrogen gas, N2) is essential to the development of commercial, closed, recirculating aquaculture systems (B1 water turnover 100 day). The problems associated with manually operated denitrification systems have been incomplete denitrification (oxidation–reduction potential, ORP\\−200 mV) with the production of nitrite (NO2 ), nitric oxide (NO) and nitrous oxide (N2O) or over-reduction (ORPB−400 mV), resulting in the production of hydrogen sulfide (H2S). The need for an anoxic or anaerobic environment for the denitrifying bacteria can also result in lowered dissolved oxygen (DO) concentrations in the rearing tanks. These problems have now been overcome by the development of a computer automated denitrifying bioreactor specifically designed for aquaculture. The prototype bioreactor (process control version) has been in operation for 4 years and commercial versions of the bioreactor are now in continuous use; these bioreactors can be operated in either batch or continuous on-line modes, maintaining NO3 − concentrations below 5 ppm. The bioreactor monitors DO, ORP, pH and water flow rate and controls water pump rate and carbon feed rate. A fuzzy logic-based expert system replaced the classical process control system for operation of the bioreactor, continuing to optimize denitrification rates and eliminate discharge of toxic by-products (i.e. NO2 , NO, N2O or www.elsevier.nl/locate/aqua-online * Corresponding author. Tel.: +1-409-7722133; fax: +1-409-7726993. E-mail address: pglee@utmb.edu (P.G. Lee) 0144-8609/00/$ see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0144 -8609 (00 )00046 -7 38 P.G. Lee et al. / Aquacultural Engineering 23 (2000) 37–59 H2S). The fuzzy logic rule base was composed of \\40 fuzzy rules; it took into account the slow response time of the system. The fuzzy logic-based expert system maintained nitrate-nitrogen concentration B5 ppm while avoiding any increase in NO2 or H2S concentrations. © 2000 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "946c81bc2361e826904c8370fc00167f", "text": "This paper describes the CMCRC systems entered in the TAC 2010 entity linking challenge. The best performing system we describe implements the document-level entity linking system from Cucerzan (2007), with several additions that exploit global information. Our implementation of Cucerzan’s method achieved a score of 74.9% in development experiments. Additional global information improves performance to 78.4%. On the TAC 2010 test data, our best system achieves a score of 84.4%, which is second in the overall rankings of submitted systems.", "title": "" }, { "docid": "56a96e6052e04121cfc7fb9008775d15", "text": "We consider the level of information security provided by random linear network coding in network scenarios in which all nodes comply with the communication protocols yet are assumed to be potential eavesdroppers (i.e. \"nice but curious\"). For this setup, which differs from wiretapping scenarios considered previously, we develop a natural algebraic security criterion, and prove several of its key properties. A preliminary analysis of the impact of network topology on the overall network coding security, in particular for complete directed acyclic graphs, is also included.", "title": "" }, { "docid": "9458b13e5a87594140d7ee759e06c76c", "text": "Digital ecosystem, as a neoteric terminology, has emerged along with the appearance of Business Ecosystem which is a form of naturally existing business network of small and medium enterprises. However, few researches have been found in the field of defining digital ecosystem. In this paper, by means of ontology technology as our research methodology, we propose to develop a conceptual model for digital ecosystem. By introducing an innovative ontological notation system, we create the hierarchical framework of digital ecosystem form up to down, based on the related theories form Digital ecosystem and business intelligence institute.", "title": "" }, { "docid": "e2d0a4d2c2c38722d9e9493cf506fc1c", "text": "This paper describes two Global Positioning System (GPS) based attitude determination algorithms which contain steps of integer ambiguity resolution and attitude computation. The first algorithm extends the ambiguity function method to account for the unique requirement of attitude determination. The second algorithm explores the artificial neural network approach to find the attitude. A test platform is set up for verifying these algorithms.", "title": "" }, { "docid": "8e878e5083d922d97f8d573c54cbb707", "text": "Deep neural networks have become the stateof-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LMarchitecture is an effective structure that can be used on any ResNet-like networks. 
In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networks while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.", "title": "" }, { "docid": "e5ecbd3728e93badd4cfbf5eef6957f9", "text": "Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major critical challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and depend on approaches that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as phase images of the cytoplasms of individual bacterial and mammalian cells from phase contrast images without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. 
We conclude that deep convolutional neural networks are an accurate method that require less curation time, are generalizable to a multiplicity of cell types, from bacteria to mammalian cells, and expand live-cell imaging capabilities to include multi-cell type systems.", "title": "" }, { "docid": "49d3548babbc17cf265c60745dbea1a0", "text": "OBJECTIVE\nTo evaluate the role of transabdominal three-dimensional (3D) ultrasound in the assessment of the fetal brain and its potential for routine neurosonographic studies.\n\n\nMETHODS\nWe studied prospectively 202 consecutive fetuses between 16 and 24 weeks' gestation. A 3D ultrasound volume of the fetal head was acquired transabdominally. The entire brain anatomy was later analyzed using the multiplanar images by a sonologist who was expert in neonatal cranial sonography. The quality of the conventional planes obtained (coronal, sagittal and axial, at different levels) and the ability of the 3D multiplanar neuroscan to visualize properly the major anatomical structures of the brain were evaluated.\n\n\nRESULTS\nAcceptable cerebral multiplanar images were obtained in 92% of the cases. The corpus callosum could be seen in 84% of the patients, the fourth ventricle in 78%, the lateral sulcus (Sylvian fissure) in 86%, the cingulate sulcus in 75%, the cerebellar hemispheres in 98%, the cerebellar vermis in 92%, the medulla oblongata in 97% and the cavum vergae in 9% of them. The thalami and the cerebellopontine cistern (cisterna magna) were identified in all cases. At or beyond 20 weeks, superior visualization (in > 90% of cases) was achieved of the cerebral fissures, the corpus callosum (97%), the supracerebellar cisterns (92%) and the third ventricle (93%). Some cerebral fissures were seen initially at 16-17 weeks.\n\n\nCONCLUSION\nMultiplanar images obtained by transabdominal 3D ultrasound provide a simple and effective approach for detailed evaluation of the fetal brain anatomy. This technique has the potential to be used in the routine fetal anomaly scan.", "title": "" } ]
scidocsrr
af1fab102399874c4db81ab4ba22d91d
Newer Understanding of Specific Anatomic Targets in the Aging Face as Applied to Injectables: Aging Changes in the Craniofacial Skeleton and Facial Ligaments.
[ { "docid": "36f3596c64ba154e725abe5ed5cc43df", "text": "In this article, which focuses on concepts rather than techniques, the author emphasizes that the best predictor of a good facelift outcome is an already attractive face that has good enough tissue quality to maintain a result past the swelling stage. The author notes that too often, surgeons gravitate toward a particular facial support technique and use it all the time, to often unsatisfactory results. He singles out different areas (the brows, the tear trough, the cheeks, and so forth) and shows how the addition of volume may give results better than traditional methods. As he points out, a less limited and ritualistic approach to the face seems to be how cosmetic surgery is evolving; all factors that might make a face better are reasonable to entertain.", "title": "" } ]
[ { "docid": "dc2770a8318dd4aa1142efebe5547039", "text": "The purpose of this study was to describe how reaching onset affects the way infants explore objects and their own bodies. We followed typically developing infants longitudinally from 2 through 5 months of age. At each visit we coded the behaviors infants performed with their hand when an object was attached to it versus when the hand was bare. We found increases in the performance of most exploratory behaviors after the emergence of reaching. These increases occurred both with objects and with bare hands. However, when interacting with objects, infants performed the same behaviors they performed on their bare hands but they performed them more often and in unique combinations. The results support the tenets that: (1) the development of object exploration begins in the first months of life as infants learn to selectively perform exploratory behaviors on their bodies and objects, (2) the onset of reaching is accompanied by significant increases in exploration of both objects and one's own body, (3) infants adapt their self-exploratory behaviors by amplifying their performance and combining them in unique ways to interact with objects.", "title": "" }, { "docid": "7f65b9d7d07eee04405fc7102bd51f71", "text": "Researchers tend to cite highly cited articles, but how these highly cited articles influence the citing articles has been underexplored. This paper investigates how one highly cited essay, Hirsch’s “h-index” article (H-article) published in 2005, has been cited by other articles. Content-based citation analysis is applied to trace the dynamics of the article’s impact changes from 2006 to 2014. The findings confirm that citation context captures the changing impact of the H-article over time in several ways. In the first two years, average citation mention of H-article increased, yet continued to decline with fluctuation until 2014. In contrast with citation mention, average citation count stayed the same. The distribution of citation location over time also indicates three phases of the H-article “Discussion,” “Reputation,” and “Adoption” we propose in this study. Based on their locations in the citing articles and their roles in different periods, topics of citation context shifted gradually when an increasing number of other articles were co-mentioned with the H-article in the same sentences. These outcomes show that the impact of the H-article manifests in various ways within the content of these citing articles that continued to shift in nine years, data that is not captured by traditional means of citation analysis that do not weigh citation impacts over time.", "title": "" }, { "docid": "bdb4aba2b34731ffdf3989d6d1186270", "text": "In order to push the performance on realistic computer vision tasks, the number of classes in modern benchmark datasets has significantly increased in recent years. This increase in the number of classes comes along with increased ambiguity between the class labels, raising the question if top-1 error is the right performance measure. In this paper, we provide an extensive comparison and evaluation of established multiclass methods comparing their top-k performance both from a practical as well as from a theoretical perspective. Moreover, we introduce novel top-k loss functions as modifications of the softmax and the multiclass SVM losses and provide efficient optimization schemes for them. 
In the experiments, we compare on various datasets all of the proposed and established methods for top-k error optimization. An interesting insight of this paper is that the softmax loss yields competitive top-k performance for all k simultaneously. For a specific top-k error, our new top-k losses lead typically to further improvements while being faster to train than the softmax.", "title": "" }, { "docid": "4026a27bedea22a0115912cc1a384bf2", "text": "This brief presents an ultralow-voltage multistage rectifier built with standard threshold CMOS for energy-harvesting applications. A threshold-compensated diode (TCD) is developed to minimize the forward voltage drop while maintaining low reverse leakage flow. In addition, an interstage compensation scheme is proposed that enables efficient power conversion at input amplitudes below the diode threshold. The new rectifier also features an inherent temperature and process compensation mechanism, which is achieved by precisely tracking the diode threshold by an auxiliary dummy. Although the design is optimized for an ac input at 13.56 MHz, the presented enhancement techniques are also applicable for low- or ultrahigh-frequency energy scavengers. The rectifier prototype is fabricated in a 0.35-μm four-metal two-poly standard CMOS process with the worst-case threshold voltage of 600 mV/- 780 mV for nMOS/pMOS, respectively. With a 13.56 MHz input of a 500 mV amplitude, the rectifier is able to deliver more than 35 μW at 2.5 V VDD, and the measured deviation in the output voltage is as low as 180 mV over 100°C for a cascade of ten TCDs.", "title": "" }, { "docid": "4f90f6a836b775e1c7026bff7241a94e", "text": "The Solar Shirt is a wearable computing design concept and demo in the area of sustainable and ecological design. The Solar Shirt showcases a concept, which detects the level of noise pollution in the wearer's environment and illustrates it with a garment-integrated display. In addition, the design concept utilizes printed electronic solar cells as part of the garment design, illustrating a design vision towards zero energy wearable computing. The Solar Shirt uses reindeer leather as its main material, giving a soft and luxurious feeling to the garment. The material selections and the style of the garment derive their inspiration from Arctic Design, reflecting the purity of nature and the simplicity and silence of a snowy world.", "title": "" }, { "docid": "cd7210c8c9784bdf56fe72acb4f9e8e2", "text": "Many-objective (four or more objectives) optimization problems pose a great challenge to the classical Pareto-dominance based multi-objective evolutionary algorithms (MOEAs), such as NSGA-II and SPEA2. This is mainly due to the fact that the selection pressure based on Pareto-dominance degrades severely with the number of objectives increasing. Very recently, a reference-point based NSGA-II, referred as NSGA-III, is suggested to deal with many-objective problems, where the maintenance of diversity among population members is aided by supplying and adaptively updating a number of well-spread reference points. However, NSGA-III still relies on Pareto-dominance to push the population towards Pareto front (PF), leaving room for the improvement of its convergence ability. In this paper, an improved NSGA-III procedure, called θ-NSGA-III, is proposed, aiming to better tradeoff the convergence and diversity in many-objective optimization. 
In θ-NSGA-III, the non-dominated sorting scheme based on the proposed θ-dominance is employed to rank solutions in the environmental selection phase, which ensures both convergence and diversity. Computational experiments have shown that θ-NSGA-III is significantly better than the original NSGA-III and MOEA/D on most instances no matter in convergence and overall performance.", "title": "" }, { "docid": "77278e6ba57e82c88f66bd9155b43a50", "text": "Up to the time when a huge corruption scandal, popularly labeled tangentopoli”(bribe city), brought down the political establishment that had ruled Italy for several decades, that country had reported one of the largest shares of capital spending in GDP among the OECD countries. After the scandal broke out and several prominent individuals were sent to jail, or even committed suicide, capital spending fell sharply. The fall seems to have been caused by a reduction in the number of capital projects being undertaken and, perhaps more importantly, by a sharp fall in the costs of the projects still undertaken. Information released by Transparency International (TI) reports that, within the space of two or three years, in the city of Milan, the city where the scandal broke out in the first place, the cost of city rail links fell by 52 percent, the cost of one kilometer of subway fell by 57 percent, and the budget for the new airport terminal was reduced by 59 percent to reflect the lower construction costs. Although one must be aware of the logical fallacy of post hoc, ergo propter hoc, the connection between the two events is too strong to be attributed to a coincidence. In fact, this paper takes the view that it could not have been a coincidence.", "title": "" }, { "docid": "e5f6d7ed8d2dbf0bc2cde28e9c9e129b", "text": "Change detection is the process of finding out difference between two images taken at two different times. With the help of remote sensing the . Here we will try to find out the difference of the same image taken at different times. here we use mean ratio and log ratio to find out the difference in the images. Log is use to find background image and fore ground detected by mean ratio. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.", "title": "" }, { "docid": "48aea9478d2a9f1edb108202bd65e8dd", "text": "The popularity of mobile devices and location-based services (LBSs) has raised significant concerns regarding the location privacy of their users. A popular approach to protect location privacy is anonymizing the users of LBS systems. In this paper, we introduce an information-theoretic notion for location privacy, which we call perfect location privacy. We then demonstrate how anonymization should be used by LBS systems to achieve the defined perfect location privacy. We study perfect location privacy under two models for user movements. First, we assume that a user’s current location is independent from her past locations. 
Using this independent identically distributed (i.i.d.) model, we show that if the pseudonym of the user is changed before $O\left({n^{\frac {2}{r-1}}}\right)$ observations are made by the adversary for that user, then the user has perfect location privacy. Here, $n$ is the number of the users in the network and $r$ is the number of all possible locations. Next, we model users’ movements using Markov chains to better model real-world movement patterns. We show that perfect location privacy is achievable for a user if the user’s pseudonym is changed before $O\left({n^{\frac {2}{|E|-r}}}\right)$ observations are collected by the adversary for that user, where $|E|$ is the number of edges in the user’s Markov chain model.", "title": "" }, { "docid": "0d6d2413cbaaef5354cf2bcfc06115df", "text": "Bibliometric and “tech mining” studies depend on a crucial foundation—the search strategy used to retrieve relevant research publication records. Database searches for emerging technologies can be problematic in many respects, for example the rapid evolution of terminology, the use of common phraseology, or the extent of “legacy technology” terminology. Searching on such legacy terms may or may not pick up R&D pertaining to the emerging technology of interest. A challenge is to assess the relevance of legacy terminology in building an effective search model. Common-usage phraseology additionally confounds certain domains in which broader managerial, public interest, or other considerations are prominent. In contrast, searching for highly technical topics is relatively straightforward. In setting forth to analyze “Big Data,” we confront all three challenges—emerging terminology, common usage phrasing, and intersecting legacy technologies. In response, we have devised a systematic methodology to help identify research relating to Big Data. This methodology uses complementary search approaches, starting with a Boolean search model and subsequently employs contingency term sets to further refine the selection. The four search approaches considered are: (1) core lexical query, (2) expanded lexical query, (3) specialized journal search, and (4) cited reference analysis. Of special note here is the use of a “Hit-Ratio” that helps distinguish Big Data elements from less relevant legacy technology terms. We believe that such a systematic search development positions us to do meaningful analyses of Big Data research patterns, connections, and trajectories. Moreover, we suggest that such a systematic search approach can help formulate more replicable searches with high recall and satisfactory precision for other emerging technology studies.", "title": "" }, { "docid": "6572c7d33fcb3f1930a41b4b15635ffe", "text": "Neurons in area MT (V5) are selective for the direction of visual motion. In addition, many are selective for the motion of complex patterns independent of the orientation of their components, a behavior not seen in earlier visual areas. We show that the responses of MT cells can be captured by a linear-nonlinear model that operates not on the visual stimulus, but on the afferent responses of a population of nonlinear V1 cells. 
We fit this cascade model to responses of individual MT neurons and show that it robustly predicts the separately measured responses to gratings and plaids. The model captures the full range of pattern motion selectivity found in MT. Cells that signal pattern motion are distinguished by having convergent excitatory input from V1 cells with a wide range of preferred directions, strong motion opponent suppression and a tuned normalization that may reflect suppressive input from the surround of V1 cells.", "title": "" }, { "docid": "38190bd8f531a7e165a3d786b4bd900c", "text": "We define a second-order neural network stochastic gradient training algorithm whose block-diagonal structure effectively amounts to normalizing the unit activations. Investigating why this algorithm lacks in robustness then reveals two interesting insights. The first insight suggests a new way to scale the stepsizes, clarifying popular algorithms such as RMSProp as well as old neural network tricks such as fanin stepsize scaling. The second insight stresses the practical importance of dealing with fast changes of the curvature of the cost.", "title": "" }, { "docid": "0a5ae1eb45404d6a42678e955c23116c", "text": "This study assessed the validity of the Balance Scale by examining: how Scale scores related to clinical judgements and self-perceptions of balance, laboratory measures of postural sway and external criteria reflecting balancing ability; if scores could predict falls in the elderly; and how they related to motor and functional performance in stroke patients. Elderly residents (N = 113) were assessed for functional performance and balance regularly over a nine-month period. Occurrence of falls was monitored for a year. Acute stroke patients (N = 70) were periodically rated for functional independence, motor performance and balance for over three months. Thirty-one elderly subjects were assessed by clinical and laboratory indicators reflecting balancing ability. The Scale correlated moderately with caregiver ratings, self-ratings and laboratory measures of sway. Differences in mean Scale scores were consistent with the use of mobility aids by elderly residents and differentiated stroke patients by location of follow-up. Balance scores predicted the occurrence of multiple falls among elderly residents and were strongly correlated with functional and motor performance in stroke patients.", "title": "" }, { "docid": "60fe0b363310d7407a705e3c1037aa15", "text": "AIMS\nThe aim was to investigate the biosorption of chromium, nickel and iron from metallurgical effluents, produced by a steel foundry, using a strain of Aspergillus terreus immobilized in polyurethane foam.\n\n\nMETHODS AND RESULTS\nA. terreus UFMG-F01 was immobilized in polyurethane foam and subjected to biosorption tests with metallurgical effluents. Maximal metal uptake values of 164.5 mg g(-1) iron, 96.5 mg g(-1) chromium and 19.6 mg g(-1) nickel were attained in a culture medium containing 100% of effluent stream supplemented with 1% of glucose, after 6 d of incubation.\n\n\nCONCLUSIONS\nMicrobial populations in metal-polluted environments include fungi that have adapted to otherwise toxic concentrations of heavy metals and have become metal resistant. In this work, a strain of A. terreus was successfully used as a metal biosorbent for the treatment of metallurgical effluents.\n\n\nSIGNIFICANCE AND IMPACT OF THE STUDY\nA. terreus UFMG-F01 was shown to have good biosorption properties with respect to heavy metals. 
The low cost and simplicity of this technique make its use ideal for the treatment of effluents from steel foundries.", "title": "" }, { "docid": "c68cfa9402dcc2a79e7ab2a7499cc683", "text": "Stereo-pair images obtained from two cameras can be used to compute three-dimensional (3D) world coordinates of a point using triangulation. However, to apply this method, camera calibration parameters for each camera need to be experimentally obtained. Camera calibration is a rigorous experimental procedure in which typically 12 parameters are to be evaluated for each camera. The general camera model is often such that the system becomes nonlinear and requires good initial estimates to converge to a solution. We propose that, for stereo vision applications in which real-world coordinates are to be evaluated, arti® cial neural networks be used to train the system such that the need for camera calibration is eliminated. The training set for our neural network consists of a variety of stereo-pair images and corresponding 3D world coordinates. We present the results obtained on our prototype mobile robot that employs two cameras as its sole sensors and navigates through simple regular obstacles in a high-contrast environment. We observe that the percentage errors obtained from our set-up are comparable with those obtained through standard camera calibration techniques and that the system is accurate enough for most machine-vision applications.", "title": "" }, { "docid": "1c6114188e01fb6c06c2ecdb1ced1565", "text": "Social Virtual Reality based Learning Environments (VRLEs) such as vSocial render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. There are limited prior works that explored attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify risks corresponding to security, privacy, and safety (SPS) threats. The SPS threats can adversely impact the educational user experience and hinder delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats with rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with an adhoc attack tree to study the trade-offs between overheads in managing attack trees, and the cost of risk mitigation when vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a more safer, privacy-preserving and secure VRLE system.", "title": "" }, { "docid": "f53d13eeccff0048fc96e532a52a2154", "text": "The physical principles underlying some current biomedical applications of magnetic nanoparticles are reviewed. Starting from well-known basic concepts, and drawing on examples from biology and biomedicine, the relevant physics of magnetic materials and their responses to applied magnetic fields are surveyed. The way these properties are controlled and used is illustrated with reference to (i) magnetic separation of labelled cells and other biological entities; (ii) therapeutic drug, gene and radionuclide delivery; (iii) radio frequency methods for the catabolism of tumours via hyperthermia; and (iv) contrast enhancement agents for magnetic resonance imaging applications. 
Future prospects are also discussed.", "title": "" }, { "docid": "6dcb885d26ca419925a094ade17a4cf7", "text": "This paper presents two different Ku-Band Low-Profile antenna concepts for Mobile Satellite Communications. The antennas are based on low-cost hybrid mechanical-electronic steerable solutions but, while the first one allows a broadband reception of a satellite signal (Receive-only antenna concept), the second one provides transmit and receive functions for a bi-directional communication link between the satellite and the mobile user terminal (Transmit-Receive antenna). Both examples are suitable for integration in land vehicles and aircrafts.", "title": "" }, { "docid": "601488a8e576d465a0bddd65a937c5c8", "text": "Human activity recognition is an area of growing interest facilitated by the current revolution in body-worn sensors. Activity recognition allows applications to construct activity profiles for each subject which could be used effectively for healthcare and safety applications. Automated human activity recognition systems face several challenges such as number of sensors, sensor precision, gait style differences, and others. This work proposes a machine learning system to automatically recognise human activities based on a single body-worn accelerometer. The in-house collected dataset contains 3D acceleration of 50 subjects performing 10 different activities. The dataset was produced to ensure robustness and prevent subject-biased results. The feature vector is derived from simple statistical features. The proposed method benefits from RGB-to-YIQ colour space transform as kernel to transform the feature vector into more discriminable features. The classification technique is based on an adaptive boosting ensemble classifier. The proposed system shows consistent classification performance up to 95% accuracy among the 50 subjects.", "title": "" }, { "docid": "b91cf13547266547b14e5520e3a12749", "text": "The objective of this article is to review radio frequency identification (RFID) technology, its developments on RFID transponders, design and operating principles, so that end users can benefit from knowing which transponder meets their requirements. In this article, RFID system definition, RFID transponder architecture and RFID transponder classification based on a comprehensive literature review on the field of research are presented. Detailed descriptions of these tags are also presented, as well as an in-house developed semiactive tag in a compact package.", "title": "" } ]
scidocsrr
c7fe6f0d3ce5d6f4407df003df4ad95d
Deep Learning for Image Denoising: A Survey
[ { "docid": "321abc49830c6d8c062087150f00532f", "text": "In this paper, we propose an approach to learn hierarchical features for visual object tracking. First, we offline learn features robust to diverse motion patterns from auxiliary video sequences. The hierarchical features are learned via a two-layer convolutional neural network. Embedding the temporal slowness constraint in the stacked architecture makes the learned features robust to complicated motion transformations, which is important for visual object tracking. Then, given a target video sequence, we propose a domain adaptation module to online adapt the pre-learned features according to the specific target object. The adaptation is conducted in both layers of the deep feature learning module so as to include appearance information of the specific target object. As a result, the learned hierarchical features can be robust to both complicated motion transformations and appearance changes of target objects. We integrate our feature learning algorithm into three tracking methods. Experimental results demonstrate that significant improvement can be achieved using our learned hierarchical features, especially on video sequences with complicated motion transformations.", "title": "" }, { "docid": "7926ab6b5cd5837a9b3f59f8a1b3f5ac", "text": "Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the longterm dependency problem is rarely realized for these very deep models, which results in the prior states/layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at https://github.com/tyshiwo/MemNet.", "title": "" }, { "docid": "b5453d9e4385d5a5ff77997ad7e3f4f0", "text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.", "title": "" } ]
[ { "docid": "83d330486c50fe2ae1d6960a4933f546", "text": "In this paper, an upgraded version of vehicle tracking system is developed for inland vessels. In addition to the features available in traditional VTS (Vehicle Tracking System) for automobiles, it has the capability of remote monitoring of the vessel's motion and orientation. Furthermore, this device can detect capsize events and other accidents by motion tracking and instantly notify the authority and/or the owner with current coordinates of the vessel, which is obtained using the Global Positioning System (GPS). This can certainly boost up the rescue process and minimize losses. We have used GSM network for the communication between the device installed in the ship and the ground control. So, this can be implemented only in the inland vessels. But using iridium satellite communication instead of GSM will enable the device to be used in any sea-going ships. At last, a model of an integrated inland waterway control system (IIWCS) based on this device is discussed.", "title": "" }, { "docid": "fc9061348b46fc1bf7039fa5efcbcea1", "text": "We propose that a leadership identity is coconstructed in organizations when individuals claim and grant leader and follower identities in their social interactions. Through this claiming-granting process, individuals internalize an identity as leader or follower, and those identities become relationally recognized through reciprocal role adoption and collectively endorsed within the organizational context. We specify the dynamic nature of this process, antecedents to claiming and granting, and an agenda for research on leadership identity and development.", "title": "" }, { "docid": "fb80c27ab2615373a316605082adadbb", "text": "The use of sparse representations in signal and image processing is gradually increasing in the past several years. Obtaining an overcomplete dictionary from a set of signals allows us to represent them as a sparse linear combination of dictionary atoms. Pursuit algorithms are then used for signal decomposition. A recent work introduced the K-SVD algorithm, which is a novel method for training overcomplete dictionaries that lead to sparse signal representation. In this work we propose a new method for compressing facial images, based on the K-SVD algorithm. We train K-SVD dictionaries for predefined image patches, and compress each new image according to these dictionaries. The encoding is based on sparse coding of each image patch using the relevant trained dictionary, and the decoding is a simple reconstruction of the patches by linear combination of atoms. An essential pre-process stage for this method is an image alignment procedure, where several facial features are detected and geometrically warped into a canonical spatial location. We present this new method, analyze its results and compare it to several competing compression techniques. 2008 Published by Elsevier Inc.", "title": "" }, { "docid": "c6a23113b0e88c884eaddfba9cce2667", "text": "Recent research in machine learning has focused on breaking audio spectrograms into separate sources of sound using latent variable decompositions. These methods require that the number of sources be specified in advance, which is not always possible. To address this problem, we develop Gamma Process Nonnegative Matrix Factorization (GaP-NMF), a Bayesian nonparametric approach to decomposing spectrograms. 
The assumptions behind GaP-NMF are based on research in signal processing regarding the expected distributions of spectrogram data, and GaP-NMF automatically discovers the number of latent sources. We derive a mean-field variational inference algorithm and evaluate GaP-NMF on both synthetic data and recorded music.", "title": "" }, { "docid": "fac1eebdae6719224a6bd01785c72551", "text": "Tolerance design has become a very sensitive and important issue in product and process development because of increasing demand for quality products and the growing requirements for automation in manufacturing. This chapter presents tolerance stack up analysis of dimensional and geometrical tolerances. The stack up of tolerances is important for functionality of the mechanical assembly as well as optimizing the cost of the system. Many industries are aware of the importance of geometrical dimensioning & Tolerancing (GDT) of their product design. Conventional methods of tolerance stack up analysis are tedious and time consuming. Stack up of geometrical tolerances is usually difficult as it involves application of numerous rules & conditions. This chapter introduces the various approaches viz. Generic Capsule, Quickie and Catena methods, used towards tolerance stack up analysis for geometrical tolerances. Automation of stack up of geometrical tolerances can be used for tolerance allocation on the components as well as their assemblies considering the functionality of the system. Stack of geometrical tolerances has been performed for individual components as well as assembly of these components.", "title": "" }, { "docid": "d299f1ff3249a68b582494713e02a6bd", "text": "We consider the Vehicle Routing Problem, in which a fixed fleet of delivery vehicles of uniform capacity must service known customer demands for a single commodity from a common depot at minimum transit cost. This difficult combinatorial problem contains both the Bin Packing Problem and the Traveling Salesman Problem (TSP) as special cases and conceptually lies at the intersection of these two well-studied problems. The capacity constraints of the integer programming formulation of this routing model provide the link between the underlying routing and packing structures. We describe a decomposition-based separation methodology for the capacity constraints that takes advantage of our ability to solve small instances of the TSP efficiently. Specifically, when standard procedures fail to separate a candidate point, we attempt to decompose it into a convex combination of TSP tours; if successful, the tours present in this decomposition are examined for violated capacity constraints; if not, the Farkas Theorem provides a hyperplane separating the point from the TSP polytope. We present some extensions of this basic concept and a general framework within which it can be applied to other combinatorial models. Computational results are given for an implementation within the parallel branch, cut, and price framework SYMPHONY.", "title": "" }, { "docid": "368c91e483429b54989efea3a80fb370", "text": "A large amount of land-use, environment, socio-economic, energy and transport data is generated in cities. An integrated perspective of managing and analysing such big data can answer a number of science, policy, planning, governance and business questions and support decision making in enabling a smarter environment. 
This paper presents a theoretical and experimental perspective on the smart cities focused big data management and analysis by proposing a cloud-based analytics service. A prototype has been designed and developed to demonstrate the effectiveness of the analytics service for big data analysis. The prototype has been implemented using Hadoop and Spark and the results are compared. The service analyses the Bristol Open data by identifying correlations between selected urban environment indicators. Experiments are performed using Hadoop and Spark and results are presented in this paper. The data pertaining to quality of life mainly crime and safety & economy and employment was analysed from the data catalogue to measure the indicators spread over years to assess positive and negative trends.", "title": "" }, { "docid": "7401b3a6801b5c1349d961434ca69a3d", "text": "developed out of a need to solve a problem. The problem was posed, in the late 1960s, to the Optical Sciences Center (OSC) at the University of Arizona by the US Air Force. They wanted to improve the images of satellites taken from earth. The earth's atmosphere limits the image quality and exposure time of stars and satellites taken with telescopes over 5 inches in diameter at low altitudes and 10 to 12 inches in diameter at high altitudes. Dr. Aden Mienel was director of the OSC at that time. He came up with the idea of enhancing images of satellites by measuring the Optical Transfer Function (OTF) of the atmosphere and dividing the OTF of the image by the OTF of the atmosphere. The trick was to measure the OTF of the atmosphere at the same time the image was taken and to control the exposure time so as to capture a snapshot of the atmospheric aberrations rather than to average over time. The measured wavefront error in the atmosphere should not change more than ␭/10 over the exposure time. The exposure time for a low earth orbit satellite imaged from a mountaintop was determined to be about 1/60 second. Mienel was an astronomer and had used the standard Hartmann test (Fig 1), where large wooden or cardboard panels were placed over the aperture of a large telescope. The panels had an array of holes that would allow pencils of rays from stars to be traced through the telescope system. A photographic plate was placed inside and outside of focus, with a sufficient separation, so the pencil of rays would be separated from each other. Each hole in the panel would produce its own blurry image of the star. By taking two images a known distance apart and measuring the centroid of the images, one can trace the rays through the focal plane. Hartmann used these ray traces to calculate figures of merit for large telescopes. The data can also be used to make ray intercept curves (H'-tan U'). When Mienel could not cover the aperture while taking an image of the satellite, he came up with the idea of inserting a beam splitter in collimated space behind the eyepiece and placing a plate with holes in it at the image of the pupil. Each hole would pass a pencil of rays to a vidicon tube (this was before …", "title": "" }, { "docid": "1e7c1dfe168aec2353b31613811112ae", "text": "A great video title describes the most salient event compactly and captures the viewer’s attention. In contrast, video captioning tends to generate sentences that describe the video as a whole. Although generating a video title automatically is a very useful task, it is much less addressed than video captioning. 
We address video title generation for the first time by proposing two methods that extend state-of-the-art video captioners to this new task. First, we make video captioners highlight sensitive by priming them with a highlight detector. Our framework allows for jointly training a model for title generation and video highlight localization. Second, we induce high sentence diversity in video captioners, so that the generated titles are also diverse and catchy. This means that a large number of sentences might be required to learn the sentence structure of titles. Hence, we propose a novel sentence augmentation method to train a captioner with additional sentence-only examples that come without corresponding videos. We collected a large-scale Video Titles in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos and titles. On VTW, our methods consistently improve title prediction accuracy, and achieve the best performance in both automatic and human evaluation. Finally, our sentence augmentation method also outperforms the baselines on the M-VAD dataset.", "title": "" }, { "docid": "84d9b3e6e4b09515591fb20896b4fa43", "text": "This paper describes the design and fabrication of low-cost coplanar waveguide (CPW) miniature meander inductors. Inductors are fabricated on a flexible plastic polyimide foil in ink-jet printed technology with silver nanoparticle ink in a single layer. For the first time, the detailed characterization and simulation of CPW inductors in this technology is reported. The inductors are developed with impressive measured self-resonance frequency up to 18.6 GHz. The 2.107-nH inductor measures only 1 mm × 1.7 mm × 0.075 mm and demonstrates a high level of miniaturization in ink-jet printing technology. The measured response characteristics are in excellent agreement with the predicted simulation response.", "title": "" }, { "docid": "afbded5d6624b0b36e5072e3b16175b6", "text": "The authors propose a method for embedding a multitone watermark using low computational complexity. The proposed approach can guard against reasonable cropping or print-and-scan attacks.", "title": "" }, { "docid": "f4ebbcebefbcc1ba8b6f8e5bf6096645", "text": "With advances in wireless communication technology, more and more people depend heavily on portable mobile devices for businesses, entertainments and social interactions. Although such portable mobile devices can offer various promising applications, their computing resources remain limited due to their portable size. This however can be overcome by remotely executing computation-intensive tasks on clusters of near by computers known as cloudlets. As increasing numbers of people access the Internet via mobile devices, it is reasonable to envision in the near future that cloudlet services will be available for the public through easily accessible public wireless metropolitan area networks (WMANs). However, the outdated notion of treating cloudlets as isolated data-centers-in-a-box must be discarded as there are clear benefits to connecting multiple cloudlets together to form a network. In this paper we investigate how to balance the workload between multiple cloudlets in a network to optimize mobile application performance. We first introduce a system model to capture the response times of offloaded tasks, and formulate a novel optimization problem, that is to find an optimal redirection of tasks between cloudlets such that the maximum of the average response times of tasks at cloudlets is minimized. 
We then propose a fast, scalable algorithm for the problem. We finally evaluate the performance of the proposed algorithm through experimental simulations. The experimental results demonstrate the significant potential of the proposed algorithm in reducing the response times of tasks.", "title": "" }, { "docid": "c8d5ca95f6cd66461729cfc03772f5d0", "text": "Statistical relational models combine aspects of first-order logic and probabilistic graphical models, enabling them to model complex logical and probabilistic interactions between large numbers of objects. This level of expressivity comes at the cost of increased complexity of inference, motivating a new line of research in lifted probabilistic inference. By exploiting symmetries of the relational structure in the model, and reasoning about groups of objects as a whole, lifted algorithms dramatically improve the run time of inference and learning. The thesis has five main contributions. First, we propose a new method for logical inference, called first-order knowledge compilation. We show that by compiling relational models into a new circuit language, hard inference problems become tractable to solve. Furthermore, we present an algorithm that compiles relational models into our circuit language. Second, we show how to use first-order knowledge compilation for statistical relational models, leading to a new state-of-the-art lifted probabilistic inference algorithm. Third, we develop a formal framework for exact lifted inference, including a definition in terms of its complexity w.r.t. the number of objects in the world. From this follows a first completeness result, showing that the two-variable class of statistical relational models always supports lifted inference. Fourth, we present an algorithm for", "title": "" }, { "docid": "f3e6330844e73edfd3f9c79c8ceaefc8", "text": "BACKGROUND\nA number of surface scanning systems with the ability to quickly and easily obtain 3D digital representations of the foot are now commercially available. This review aims to present a summary of the reported use of these technologies in footwear development, the design of customised orthotics, and investigations for other ergonomic purposes related to the foot.\n\n\nMETHODS\nThe PubMed and ScienceDirect databases were searched. Reference lists and experts in the field were also consulted to identify additional articles. Studies in English which had 3D surface scanning of the foot as an integral element of their protocol were included in the review.\n\n\nRESULTS\nThirty-eight articles meeting the search criteria were included. Advantages and disadvantages of using 3D surface scanning systems are highlighted. A meta-analysis of studies using scanners to investigate the changes in foot dimensions during varying levels of weight bearing was carried out.\n\n\nCONCLUSIONS\nModern 3D surface scanning systems can obtain accurate and repeatable digital representations of the foot shape and have been successfully used in medical, ergonomic and footwear development applications. The increasing affordability of these systems presents opportunities for researchers investigating the foot and for manufacturers of foot related apparel and devices, particularly those interested in producing items that are customised to the individual. 
Suggestions are made for future areas of research and for the standardization of the protocols used to produce foot scans.", "title": "" }, { "docid": "61f6fe08fd7c78f066438b6202dbe843", "text": "State-of-charge (SOC) measures energy left in a battery, and it is critical for modeling and managing batteries. Developing efficient yet accurate SOC algorithms remains a challenging task. Most existing work uses regression based on a time-variant circuit model, which may be hard to converge and often does not apply to different types of batteries. Knowing open-circuit voltage (OCV) leads to SOC due to the well known mapping between OCV and SOC. In this paper, we propose an efficient yet accurate OCV algorithm that applies to all types of batteries. Using linear system analysis but without a circuit model, we calculate OCV based on the sampled terminal voltage and discharge current of the battery. Experiments show that our algorithm is numerically stable, robust to history dependent error, and obtains SOC with less than 4% error compared to a detailed battery simulation for a variety of batteries. Our OCV algorithm is also efficient, and can be used as a real-time electro-analytical tool revealing what is going on inside the battery.", "title": "" }, { "docid": "a5e52fc842c9b1780282efc071d87b0e", "text": "The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points and concepts are represented by regions in a (potentially) high-dimensional space. Based on our recent formalization, we present a comprehensive implementation of the conceptual spaces framework that is not only capable of representing concepts with inter-domain correlations, but that also offers a variety of operations on these concepts.", "title": "" }, { "docid": "ee8a708913949db5dbdc43bea60fce37", "text": "Sign language is the native language of deaf and hearing impaired people which they prefer to use on their daily life. Few interpreters are available to facilitate communication between deaf and vocal people. However, this is neither practical nor possible for all situations. Advances in information technology encouraged the development of systems that can facilitate the automatic translation between sign language and spoken language, and thus removing barriers facing the integration of deaf people in the society. A lot of research has been carried on the development of systems that translate sign languages into spoken words and the reverse. However, only recently systems translating between Arabic sign language and spoken language have been developed. Many signs of the Arabic sign language are reflection of the environment (White color in Arabic sign language is a finger pointing to the chest of the signer as the tradition for male is to wear white color dress). Several review papers have been published on the automatic recognition of other sign languages. This paper represents the first attempt to review systems and methods for the image based automatic recognition of the Arabic sign language. It reviews most published papers and discusses a variety of recognition methods. 
Additionally, the paper highlights the main challenges characterizing the Arabic sign language as well as potential future research directions in this area.", "title": "" }, { "docid": "7228073bef61131c2efcdc736d90ca1b", "text": "With the advent of word representations, word similarity tasks are becoming increasing popular as an evaluation metric for the quality of the representations. In this paper, we present manually annotated monolingual word similarity datasets of six Indian languages – Urdu, Telugu, Marathi, Punjabi, Tamil and Gujarati. These languages are most spoken Indian languages worldwide after Hindi and Bengali. For the construction of these datasets, our approach relies on translation and re-annotation of word similarity datasets of English. We also present baseline scores for word representation models using state-of-the-art techniques for Urdu, Telugu and Marathi by evaluating them on newly created word similarity datasets.", "title": "" }, { "docid": "eea45eb670d380e722f3148479a0864d", "text": "In this paper, we propose a hybrid Differential Evolution (DE) algorithm based on the fuzzy C-means clustering algorithm, referred to as FCDE. The fuzzy C-means clustering algorithm is incorporated with DE to utilize the information of the population efficiently, and hence it can generate good solutions and enhance the performance of the original DE. In addition, the population-based algorithmgenerator is adopted to efficiently update the population with the clustering offspring. In order to test the performance of our approach, 13 high-dimensional benchmark functions of diverse complexities are employed. The results show that our approach is effective and efficient. Compared with other state-of-the-art DE approaches, our approach performs better, or at least comparably, in terms of the quality of the final solutions and the reduction of the number of fitness function evaluations (NFFEs).", "title": "" }, { "docid": "40229eb3a95ec25c1c3247edbcc22540", "text": "The aim of this paper is the identification of a superordinate research framework for describing emerging IT-infrastructures within manufacturing, logistics and Supply Chain Management. This is in line with the thoughts and concepts of the Internet of Things (IoT), as well as with accompanying developments, namely the Internet of Services (IoS), Mobile Computing (MC), Big Data Analytics (BD) and Digital Social Networks (DSN). Furthermore, Cyber-Physical Systems (CPS) and their enabling technologies as a fundamental component of all these research streams receive particular attention. Besides of the development of an eponymous research framework, relevant applications against the background of the technological trends as well as potential areas of interest for future research, both raised from the economic practice's perspective, are identified.", "title": "" } ]
scidocsrr
9949b673c84b955c4039d71dfc4ad3ac
Streaming trend detection in Twitter
[ { "docid": "9fc2d92c42400a45cb7bf6c998dc9236", "text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.", "title": "" }, { "docid": "8732cabe1c2dc0e8587b1a7e03039ef0", "text": "With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. \n In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies <i>event threading</i>. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories.\n We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on a manually labeled data sets show that our models effectively identify the events and capture dependencies among them.", "title": "" } ]
[ { "docid": "da1cecae4f925f331fda67c784e6635d", "text": "This paper surveys recent literature on vehicular social networks that are a particular class of vehicular ad hoc networks, characterized by social aspects and features. Starting from this pillar, we investigate perspectives on next-generation vehicles under the assumption of social networking for vehicular applications (i.e., safety and entertainment applications). This paper plays a role as a starting point about socially inspired vehicles and mainly related applications, as well as communication techniques. Vehicular communications can be considered the “first social network for automobiles” since each driver can share data with other neighbors. For instance, heavy traffic is a common occurrence in some areas on the roads (e.g., at intersections, taxi loading/unloading areas, and so on); as a consequence, roads become a popular social place for vehicles to connect to each other. Human factors are then involved in vehicular ad hoc networks, not only due to the safety-related applications but also for entertainment purposes. Social characteristics and human behavior largely impact on vehicular ad hoc networks, and this arises to the vehicular social networks, which are formed when vehicles (individuals) “socialize” and share common interests. In this paper, we provide a survey on main features of vehicular social networks, from novel emerging technologies to social aspects used for mobile applications, as well as main issues and challenges. Vehicular social networks are described as decentralized opportunistic communication networks formed among vehicles. They exploit mobility aspects, and basics of traditional social networks, in order to create novel approaches of message exchange through the detection of dynamic social structures. An overview of the main state-of-the-art on safety and entertainment applications relying on social networking solutions is also provided.", "title": "" }, { "docid": "a15275cc08ad7140e6dd0039e301dfce", "text": "Cardiovascular disease is more prevalent in type 1 and type 2 diabetes, and continues to be the leading cause of death among adults with diabetes. Although atherosclerotic vascular disease has a multi-factorial etiology, disorders of lipid metabolism play a central role. The coexistence of diabetes with other risk factors, in particular with dyslipidemia, further increases cardiovascular disease risk. A characteristic pattern, termed diabetic dyslipidemia, consists of increased levels of triglycerides, low levels of high density lipoprotein cholesterol, and postprandial lipemia, and is mostly seen in patients with type 2 diabetes or metabolic syndrome. This review summarizes the trends in the prevalence of lipid disorders in diabetes, advances in the mechanisms contributing to diabetic dyslipidemia, and current evidence regarding appropriate therapeutic recommendations.", "title": "" }, { "docid": "006ea5f44521c42ec513edc1cbff1c43", "text": "In 2004 we published in this journal an article describing OntoLearn, one of the first systems to automatically induce a taxonomy from documents and Web sites. Since then, OntoLearn has continued to be an active area of research in our group and has become a reference work within the community. In this paper we describe our next-generation taxonomy learning methodology, which we name OntoLearn Reloaded. 
Unlike many taxonomy learning approaches in the literature, our novel algorithm learns both concepts and relations entirely from scratch via the automated extraction of terms, definitions, and hypernyms. This results in a very dense, cyclic and potentially disconnected hypernym graph. The algorithm then induces a taxonomy from this graph via optimal branching and a novel weighting policy. Our experiments show that we obtain high-quality results, both when building brand-new taxonomies and when reconstructing sub-hierarchies of existing taxonomies.", "title": "" }, { "docid": "a5911891697a1b2a407f231cf0ad6c28", "text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.", "title": "" }, { "docid": "0e30a01870bbbf32482b5ac346607afc", "text": "Hypothyroidism is the pathological condition in which the level of thyroid hormones declines to the deficiency state. This communication address the therapies employed for the management of hypothyroidism as per the Ayurvedic and modern therapeutic perspectives on the basis scientific papers collected from accepted scientific basis like Google, Google Scholar, PubMed, Science Direct, using various keywords. The Ayurveda describe hypothyroidism as the state of imbalance of Tridoshas and suggest the treatment via use of herbal plant extracts, life style modifications like practicing yoga and various dietary supplements. The modern medicine practice define hypothyroidism as the disease state originated due to formation of antibodies against thyroid gland and hormonal imbalance and incorporate the use of hormone replacement i.e. Levothyroxine, antioxidants. Various plants like Crataeva nurvula and dietary supplements like Capsaicin, Forskolin, Echinacea, Ginseng and Bladderwrack can serve as a potential area of research as thyrotropic agents.", "title": "" }, { "docid": "545064c02ed0ca14c53b3d083ff84eac", "text": "We present a novel polarization imaging sensor by monolithically integrating aluminum nanowire optical filters with an array of CCD imaging elements. The CCD polarization image sensor is composed of 1000 by 1000 imaging elements with 7.4 μm pixel pitch. The image sensor has a dynamic range of 65dB and signal-to-noise ratio of 45dB. The CCD array is covered with an array of pixel-pitch matched nanowire polarization filters with four different orientations offset by 45°.
The complete imaging sensor is used for real-time reconstruction of the shape of various objects.", "title": "" }, { "docid": "07905317dcdbcf1332fd57ffaa02f8d3", "text": "Motivation\nIdentifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters.\n\n\nResults\nHere, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding validates further our approach and enables chemical treatment potency estimation via CNNs.\n\n\nAvailability and Implementation\nThe network specifications and solver definitions are provided in Supplementary Software 1.\n\n\nContact\nwilliam_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "f4535d47191caaa1e830e5d8fae6e1ba", "text": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards ~100% sensitivity at the cost of high FP levels (~40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol.
in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.", "title": "" }, { "docid": "216698730aa68b3044f03c64b77e0e62", "text": "Portable biomedical instrumentation has become an important part of diagnostic and treatment instrumentation. Low-voltage and low-power tendencies prevail. A two-electrode biopotential amplifier, designed for low-supply voltage (2.7–5.5 V), is presented. This biomedical amplifier design has high differential and sufficiently low common mode input impedances achieved by means of positive feedback, implemented with an original interface stage. The presented circuit makes use of passive components of popular values and tolerances. The amplifier is intended for use in various two-electrode applications, such as Holter monitors, external defibrillators, ECG monitors and other heart beat sensing biomedical devices.", "title": "" }, { "docid": "ca9a7a1f7be7d494f6c0e3e4bb408a95", "text": "An enduring and richly elaborated dichotomy in cognitive neuroscience is that of reflective versus reflexive decision making and choice. Other literatures refer to the two ends of what is likely to be a spectrum with terms such as goal-directed versus habitual, model-based versus model-free or prospective versus retrospective. One of the most rigorous traditions of experimental work in the field started with studies in rodents and graduated via human versions and enrichments of those experiments to a current state in which new paradigms are probing and challenging the very heart of the distinction. We review four generations of work in this tradition and provide pointers to the forefront of the field's fifth generation.", "title": "" }, { "docid": "ea0b23e9c37fa35da9ff6d9091bbee5e", "text": "Since the invention of the wheel, Man has sought to reduce effort to get things done easily. Ultimately, it has resulted in the invention of the Robot, an Engineering Marvel. Up until now, the biggest factor that hampers wide proliferation of robots is locomotion and maneuverability. They are not dynamic enough to conform even to the most commonplace terrain such as stairs. To overcome this, we are proposing a stair climbing robot that looks a lot like the human leg and can adjust itself according to the height of the step. But, we are currently developing a unit to carry payload of about 4 Kg. The automatic adjustment in the robot according to the height of the stair is done by connecting an Android device that has an application programmed in OpenCV with an Arduino in Host mode. The Android Device uses it camera to calculate the height of the stair and sends it to the Arduino for further calculation. This design employs an Arduino Mega ADK 2560 board to control the robot and other home fabricated custom PCB to interface it with the Arduino Board. The bot is powered by Li-Ion batteries and Servo motors.", "title": "" }, { "docid": "9a3a73f35b27d751f237365cc34c8b28", "text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. 
While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.", "title": "" }, { "docid": "4721173eea1997316b8c9eca8b4a8d05", "text": "Conventional centralized cloud computing is a success for benefits such as on-demand, elasticity, and high colocation of data and computation. However, the paradigm shift towards “Internet of things” (IoT) will pose some unavoidable challenges: (1) massive data volume impossible for centralized datacenters to handle; (2) high latency between edge “things” and centralized datacenters; (3) monopoly, inhibition of innovations, and non-portable applications due to the proprietary application delivery in centralized cloud. The emergence of edge cloud gives hope to address these challenges. In this paper, we propose a new framework called “HomeCloud” focusing on an open and efficient new application delivery in edge cloud integrating two complementary technologies: Network Function Virtualization (NFV) and Software-Defined Networking (SDN). We also present a preliminary proof-of-concept testbed demonstrating the whole process of delivering a simple multi-party chatting application in the edge cloud. In the future, the HomeCloud framework can be further extended to support other use cases that demand portability, cost-efficiency, scalability, flexibility, and manageability. To the best of our knowledge, this framework is the first effort aiming at facilitating new application delivery in such a new edge cloud context.", "title": "" }, { "docid": "e630891703d4a4e6e65fea11698f24c7", "text": "In spite of meticulous planning, well documentation and proper process control during software development, occurrences of certain defects are inevitable. These software defects may lead to degradation of the quality which might be the underlying cause of failure. In today‟s cutting edge competition it‟s necessary to make conscious efforts to control and minimize defects in software engineering. However, these efforts cost money, time and resources. This paper identifies causative factors which in turn suggest the remedies to improve software quality and productivity. The paper also showcases on how the various defect prediction models are implemented resulting in reduced magnitude of defects.", "title": "" }, { "docid": "c5ecfcebbbd577a0bc14ccb4613a98ac", "text": "When Jean-Dominique Bauby suffered from a cortico-subcortical stroke that led to complete paralysis with totally intact sensory and cognitive functions, he described his experience in The Diving-Bell and the Butterfly as “something like a giant invisible diving-bell holds my whole body prisoner”. 
This horrifying condition also occurs as a consequence of a progressive neurological disease, amyotrophic lateral sclerosis, which involves progressive degeneration of all the motor neurons of the somatic motor system. These ‘locked-in’ patients ultimately become unable to express themselves and to communicate even their most basic wishes or desires, as they can no longer control their muscles to activate communication devices. We have developed a new means of communication for the completely paralysed that uses slow cortical potentials (SCPs) of the electro-encephalogram to drive an electronic spelling device.", "title": "" }, { "docid": "9fdecc8854f539ddf7061c304616130b", "text": "This paper describes the pricing strategy model deployed at Airbnb, an online marketplace for sharing home and experience. The goal of price optimization is to help hosts who share their homes on Airbnb set the optimal price for their listings. In contrast to conventional pricing problems, where pricing strategies are applied to a large quantity of identical products, there are no \"identical\" products on Airbnb, because each listing on our platform offers unique values and experiences to our guests. The unique nature of Airbnb listings makes it very difficult to estimate an accurate demand curve that's required to apply conventional revenue maximization pricing strategies.\n Our pricing system consists of three components. First, a binary classification model predicts the booking probability of each listing-night. Second, a regression model predicts the optimal price for each listing-night, in which a customized loss function is used to guide the learning. Finally, we apply additional personalization logic on top of the output from the second model to generate the final price suggestions. In this paper, we focus on describing the regression model in the second stage of our pricing system. We also describe a novel set of metrics for offline evaluation. The proposed pricing strategy has been deployed in production to power the Price Tips and Smart Pricing tool on Airbnb. Online A/B testing results demonstrate the effectiveness of the proposed strategy model.", "title": "" }, { "docid": "5b507508fd3b3808d61e822d2a91eab9", "text": "In this brief, we propose a stand-alone system-on-a-programmable-chip (SOPC)-based cloud system to accelerate massive electrocardiogram (ECG) data analysis. The proposed system tightly couples network I/O handling hardware to data processing pipelines in a single field-programmable gate array (FPGA), offloading both networking operations and ECG data analysis. In this system, we first propose a massive-sessions optimized TCP/IP hardware stack using a macropipeline architecture to accelerate network packet processing. Second, we propose a streaming architecture to accelerate ECG signal processing, including QRS detection, feature extraction, and classification. We verify our design on XC6VLX550T FPGA using real ECG data. Compared to commercial servers, our system shows up to 38× improvement in performance and 142× improvement in energy efficiency.", "title": "" }, { "docid": "cf94d312bb426e64e364dfa33b09efeb", "text": "The attractiveness of a face is a highly salient social signal, influencing mate choice and other social judgements. In this study, we used event-related functional magnetic resonance imaging (fMRI) to investigate brain regions that respond to attractive faces which manifested either a neutral or mildly happy face expression. 
Attractive faces produced activation of medial orbitofrontal cortex (OFC), a region involved in representing stimulus-reward value. Responses in this region were further enhanced by a smiling facial expression, suggesting that the reward value of an attractive face as indexed by medial OFC activity is modulated by a perceiver directed smile.", "title": "" }, { "docid": "986bd4907d512402a188759b5bdef513", "text": "► We consider a case of laparoscopic aortic lymphadenectomy for an early ovarian cancer including a comprehensive surgical staging. ► The patient was found to have a congenital anatomic abnormality: a right renal malrotation with an accessory renal artery. ► We used a preoperative CT angiography study to diagnose such anatomical variations and to adequate the proper surgical technique.", "title": "" } ]
scidocsrr
440d2e15509653eb7dc3bbf4f0137b10
Attending to All Mention Pairs for Full Abstract Biological Relation Extraction
[ { "docid": "a5b7253f56a487552ba3b0ce15332dd1", "text": "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (Socher et al., 2013) and TransE (Bordes et al., 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as BornInCitypa, bq ^ CityInCountrypb, cq ùñ Nationalitypa, cq. We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics, and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-ofthe-art confidence-based rule mining approach in mining horn rules that involve compositional reasoning.", "title": "" } ]
[ { "docid": "d48053467e72a6a550de8cb66b005475", "text": "In Slavic languages, verbal prefixes can be applied to perfective verbs deriving new perfective verbs, and multiple prefixes can occur in a single verb. This well-known type of data has not yet been adequately analyzed within current approaches to the semantics of Slavic verbal prefixes and aspect. The notion “aspect” covers “grammatical aspect”, or “viewpoint aspect” (see Smith 1991/1997), best characterized by the formal perfective vs. imperfective distinction, which is often expressed by inflectional morphology (as in Romance languages), and corresponds to propositional operators at the semantic level of representation. It also covers “lexical aspect”, “situation aspect” (see Smith ibid.), “eventuality types” (Bach 1981, 1986), or “Aktionsart” (as in Hinrichs 1985; Van Valin 1990; Dowty 1999; Paslawska and von Stechow 2002, for example), which regards the telic vs. atelic distinction and its Vendlerian subcategories (activities, accomplishments, achievements and states). It is lexicalized by verbs, encoded by derivational morphology, or by a variety of elements at the level of syntax, among which the direct object argument has a prominent role, however, the subject (external) argument is arguably a contributing factor, as well (see Dowty 1991, for example). These two “aspect” categories are orthogonal to each other and interact in systematic ways (see also Filip 1992, 1997, 1993/99; de Swart 1998; Paslawska and von Stechow 2002; Rothstein 2003, for example). Multiple prefixation and application of verbal prefixes to perfective bases is excluded by the common view of Slavic prefixes, according to which all perfective verbs are telic and prefixes constitute a uniform class of “perfective” markers that that are applied to imperfective verbs that are atelic and derive perfective verbs that are telic. Moreover, this view of perfective verbs and prefixes predicts rampant violations of the intuitive “one delimitation per event” constraint, whenever a prefix is applied to a perfective verb. This intuitive constraint is motivated by the observation that an event expressed within a single predication can be delimited only once: cp. *run a mile for ten minutes, *wash the clothes clean white.", "title": "" }, { "docid": "45b303fb40f120f87dd618855fa21871", "text": "The relationship between business and society has witnessed a dramatic change in the past few years. Globalization, ethical consumerism, environmental concerns, strict government regulations, and growing strength of the civil society, are all factors that forced businesses to reconsider their role in society; accordingly there has been a surge of notions that tries to explain this new complex relation between business and society. This paper aims at accentuating this evolving relation by focusing on the concept of corporate social responsibility (CSR). It differentiates between CSR and other related concepts such as business ethics and corporate philanthropy. It analyzes the different arguments in the CSR debate, pinpoints mechanisms adopted by businesses in carrying out their social responsibilities, and concludes with the link between corporate social responsibility and sustainable development.", "title": "" }, { "docid": "d8828a6cafcd918cd55b1782629b80e0", "text": "For deep-neural-network (DNN) processors [1-4], the product-sum (PS) operation predominates the computational workload for both convolution (CNVL) and fully-connect (FCNL) neural-network (NN) layers. 
This hinders the adoption of DNN processors to on the edge artificial-intelligence (AI) devices, which require low-power, low-cost and fast inference. Binary DNNs [5-6] are used to reduce computation and hardware costs for AI edge devices; however, a memory bottleneck still remains. In Fig. 31.5.1 conventional PE arrays exploit parallelized computation, but suffer from inefficient single-row SRAM access to weights and intermediate data. Computing-in-memory (CIM) improves efficiency by enabling parallel computing, reducing memory accesses, and suppressing intermediate data. Nonetheless, three critical challenges remain (Fig. 31.5.2), particularly for FCNL. We overcome these problems by co-optimizing the circuits and the system. Recently, researches have been focusing on XNOR based binary-DNN structures [6]. Although they achieve a slightly higher accuracy, than other binary structures, they require a significant hardware cost (i.e. 8T-12T SRAM) to implement a CIM system. To further reduce the hardware cost, by using 6T SRAM to implement a CIM system, we employ binary DNN with 0/1-neuron and ±1-weight that was proposed in [7]. We implemented a 65nm 4Kb algorithm-dependent CIM-SRAM unit-macro and in-house binary DNN structure (focusing on FCNL with a simplified PE array), for cost-aware DNN AI edge processors. This resulted in the first binary-based CIM-SRAM macro with the fastest (2.3ns) PS operation, and the highest energy-efficiency (55.8TOPS/W) among reported CIM macros [3-4].", "title": "" }, { "docid": "6001982cb50621fe488034d6475d1894", "text": "Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.", "title": "" }, { "docid": "c926d9a6b6fe7654e8409ae855bdeb20", "text": "A low-power, 40-Gb/s optical transceiver front-end is demonstrated in a 45-nm silicon-on-insulator (SOI) CMOS process. Both single-ended and differential optical modulators are demonstrated with floating-body transistors to reach output swings of more than 2 VPP and 4 VPP, respectively. A single-ended gain of 7.6 dB is measured over 33 GHz. The optical receiver consists of a transimpedance amplifier (TIA) and post-amplifier with 55 dB ·Ω of transimpedance over 30 GHz. The group-delay variation is ±3.9 ps over the 3-dB bandwidth and the average input-referred noise density is 20.5 pA/(√Hz) . The TIA consumes 9 mW from a 1-V supply for a transimpedance figure of merit of 1875 Ω /pJ. 
This represents the lowest power consumption for a transmitter and receiver operating at 40 Gb/s in a CMOS process.", "title": "" }, { "docid": "09c19ae7eea50f269ee767ac6e67827b", "text": "In the last years Python has gained more and more traction in the scientific community. Projects like NumPy, SciPy, and Matplotlib have created a strong foundation for scientific computing in Python and machine learning packages like scikit-learn or packages for data analysis like Pandas are building on top of it. In this paper we present Wyrm ( https://github.com/bbci/wyrm ), an open source BCI toolbox in Python. Wyrm is applicable to a broad range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application. In order to prevent software defects, Wyrm makes extensive use of unit testing. We will explain the key aspects of Wyrm’s software architecture and design decisions for its data structure, and demonstrate and validate the use of our toolbox by presenting our approach to the classification tasks of two different data sets from the BCI Competition III. Furthermore, we will give a brief analysis of the data sets using our toolbox, and demonstrate how we implemented an online experiment using Wyrm. With Wyrm we add the final piece to our ongoing effort to provide a complete, free and open source BCI system in Python.", "title": "" }, { "docid": "4c1b42e12fd4f19870b5fc9e2f9a5f07", "text": "Similar to face-to-face communication in daily life, more and more evidence suggests that human emotions also spread in online social media through virtual interactions. However, the mechanism underlying the emotion contagion, like whether different feelings spread unlikely or how the spread is coupled with the social network, is rarely investigated. Indeed, due to the costly expense and spatio-temporal limitations, it is challenging for conventional questionnaires or controlled experiments. While given the instinct of collecting natural affective responses of massive connected individuals, online social media offer an ideal proxy to tackle this issue from the perspective of computational social science. In this paper, based on the analysis of millions of tweets in Weibo, a Twitter-like service in China, we surprisingly find that anger is more contagious than joy, indicating that it can sparkle more angry follow-up tweets; and anger prefers weaker ties than joy for the dissemination in social network, indicating that it can penetrate different communities and break local traps by more sharings between strangers. Through a simple diffusion model, it is unraveled that easier contagion and weaker ties function cooperatively in speeding up anger’s spread, which is further testified by the diffusion of realistic bursty events with different dominant emotions. To our best knowledge, for the first time we quantificationally provide the long-term evidence to disclose the difference between joy and anger in dissemination mechanism and our findings would shed lights on personal anger management in human communication and collective outrage control in cyber space.", "title": "" }, { "docid": "e8a1330f93a701939367bd390e9018c7", "text": "An eccentric paddle locomotion mechanism based on the epicyclic gear mechanism (ePaddle-EGM), which was proposed to enhance the mobility of amphibious robots in multiterrain tasks, can perform various terrestrial and aquatic gaits. 
Two of the feasible aquatic gaits are the rotational paddling gait and the oscillating paddling gait. The former one has been studied in our previous work, and a capacity of generating vectored thrust has been found. In this letter, we focus on the oscillating paddling gait by measuring the generated thrusts of the gait on an ePaddle-EGM prototype module. Experimental results verify that the oscillating paddling gait can generate vectored thrust by changing the location of the paddle shaft as well. Furthermore, we compare the oscillating paddling gait with the rotational paddling gait at the vectored thrusting property, magnitude of the thrust, and the gait efficiency.", "title": "" }, { "docid": "e8a9dffcb6c061fe720e7536387f5116", "text": "The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral dataaccuracy, mean response times, and response time distributionsinto components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.", "title": "" }, { "docid": "be9cea5823779bf5ced592f108816554", "text": "Undoubtedly, bioinformatics is one of the fastest developing scientific disciplines in recent years. Bioinformatics is the development and application of computer methods for management, analysis, interpretation, and prediction, as well as for the design of experiments. There is already a significant number of books on bioinformatics. Some are introductory and require almost no prior experience in biology or computer science: “Bioinformatics Basics Applications in Biological Science and Medicine” and “Introduction to Bioinformatics.” Others are targeted to biologists entering the field of bioinformatics: “Developing Bioinformatics Computer Skills.” Some more specialized books are: “An Introduction to Support Vector Machines : And Other Kernel-Based Learning Methods”, “Biological Sequence Analysis : Probabilistic Models of Proteins and Nucleic Acids”, “Pattern Discovery in Bimolecular Data : Tools, Techniques, and Applications”, “Computational Molecular Biology: An Algorithmic Approach.” The book subject of this review has a broad scope. “Bioinformatics: The machine learning approach” is aimed at two types of researchers and students. First are the biologists and biochemists who need to understand new data-driven algorithms, such as neural networks and hidden Markov", "title": "" }, { "docid": "83651ca357b0f978400de4184be96443", "text": "The most common temporomandibular joint (TMJ) pathologic disease is anterior-medial displacement of the articular disk, which can lead to TMJ-related symptoms.The indication for disk repositioning surgery is irreversible TMJ damage associated with temporomandibular pain. We describe a surgical technique using a preauricular approach with a high condylectomy to reshape the condylar head. 
The disk is anchored with a bioabsorbable microanchor (Mitek Microfix QuickAnchor Plus 1.3) to the lateral aspect of the condylar head. The anchor is linked with a 3.0 Ethibond absorbable suture to fix the posterolateral side of the disk above the condyle.The aims of this surgery were to alleviate temporomandibular pain, headaches, and neck pain and to restore good jaw mobility. In the long term, we achieved these objectives through restoration of the physiological position and function of the disk and the lower articular compartment.In our opinion, the bioabsorbable anchor is the best choice for this type of surgery because it ensures the stability of the restored disk position and leaves no artifacts in the long term that might impede follow-up with magnetic resonance imaging.", "title": "" }, { "docid": "ad09fcab0aac68007eac167cafdd3d3c", "text": "We present HARP, a novel method for learning low dimensional embeddings of a graph’s nodes which preserves higherorder structural features. Our proposed method achieves this by compressing the input graph prior to embedding it, effectively avoiding troublesome embedding configurations (i.e. local minima) which can pose problems to non-convex optimization. HARP works by finding a smaller graph which approximates the global structure of its input. This simplified graph is used to learn a set of initial representations, which serve as good initializations for learning representations in the original, detailed graph. We inductively extend this idea, by decomposing a graph in a series of levels, and then embed the hierarchy of graphs from the coarsest one to the original graph. HARP is a general meta-strategy to improve all of the stateof-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec. Indeed, we demonstrate that applying HARP’s hierarchical paradigm yields improved implementations for all three of these methods, as evaluated on classification tasks on real-world graphs such as DBLP, BlogCatalog, and CiteSeer, where we achieve a performance gain over the original implementations by up to 14% Macro F1.", "title": "" }, { "docid": "57edf07b135a073e5e780eabd0fd2bf8", "text": "Boolean tensor decomposition approximates data of multi-way binary relationships as product of interpretable low-rank binary factors, following the rules of Boolean algebra. Here, we present its first probabilistic treatment. We facilitate scalable sampling-based posterior inference by exploitation of the combinatorial structure of the factor conditionals. Maximum a posteriori decompositions feature higher accuracies than existing techniques throughout a wide range of simulated conditions. Moreover, the probabilistic approach facilitates the treatment of missing data and enables model selection with much greater accuracy. We investigate three real-world data-sets. First, temporal interaction networks in a hospital ward and behavioural data of university students demonstrate the inference of instructive latent patterns. Next, we decompose a tensor with more than 10 billion data points, indicating relations of gene expression in cancer patients. Not only does this demonstrate scalability, it also provides an entirely novel perspective on relational properties of continuous data and, in the present example, on the molecular heterogeneity of cancer. 
Our implementation is available on GitHub2.", "title": "" }, { "docid": "47b8daaaa43535ec29461f0d1b86566d", "text": "This article aims to improve nurses' knowledge of wound debridement through a review of different techniques and the related physiology of wound healing. Debridement has long been an established component of effective wound management. However, recent clinical developments have widened the choice of methods available. This article provides an overview of the physiology of wounds, wound bed preparation, methods of debridement and the important considerations for the practitioner in implementing effective, informed and patient-centred wound care.", "title": "" }, { "docid": "d55d212b64b76c94b1b93e39907ea06c", "text": "The machine learning community has recently shown a lot of interest in practical probabilistic programming systems that target the problem of Bayesian inference. Such systems come in different forms, but they all express probabilistic models as computational processes using syntax resembling programming languages. In the functional programming community monads are known to offer a convenient and elegant abstraction for programming with probability distributions, but their use is often limited to very simple inference problems. We show that it is possible to use the monad abstraction for constructing probabilistic models, while still offering good performance of inference in challenging models. We use a GADT as an underlying representation of a probability distribution and apply Sequential Monte Carlo-based methods to achieve efficient inference. We define a formal semantics via measure theory and check the monad laws. We demonstrate a clean and elegant implementation that achieves performance comparable with Anglican, a state-of-the-art probabilistic programming system.", "title": "" }, { "docid": "7d9b919720ad38107336fdf4c5977d4b", "text": "Automated human behaviour analysis has been, and still remains, a challenging problem. It has been dealt from different points of views: from primitive actions to human interaction recognition. This paper is focused on trajectory analysis which allows a simple high level understanding of complex human behaviour. It is proposed a novel representation method of trajectory data, called Activity Description Vector (ADV) based on the number of occurrences of a person is in a specific point of the scenario and the local movements that perform in it. The ADV is calculated for each cell of the scenario in which it is spatially sampled obtaining a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared to other approaches using CAVIAR dataset sequences obtaining great accuracy in the recognition of the behaviour of people in a Shopping Centre.", "title": "" }, { "docid": "1d0874e5fdb6635e07f08d59b113e57e", "text": "In this paper a convolutional neural network is applied to the problem of note onset detection in audio recordings. Two time-frequency representations are analysed, showing the superiority of standard spectrogram over enhanced autocorrelation (EAC) used as the input to the convolutional network. 
Experimental evaluation is based on a dataset containing 10,939 annotated onsets, with total duration of the audio recordings of over 45 min.", "title": "" }, { "docid": "80ea6a0b24c857c02ead9d10f3de0870", "text": "Phishing is an attempt to acquire one's information without user's knowledge by tricking him by making similar kind of website or sending emails to user which looks like legitimate site or email. Phishing is a social cyber threat attack, which is causing severe loss of economy to the user, due to phishing attacks online transaction users are declining. This paper aims to design and implement a new technique to detect phishing web sites using Google's PageRank. Google gives a PageRank value to each site in the web. This work uses the PageRank value and other features to classify phishing sites from normal sites. We have collected a dataset of 100 phishing sites and 100 legitimate sites for our use. By using this Google PageRank technique 98% of the sites are correctly classified, showing only 0.02 false positive rate and 0.02 false negative rate.", "title": "" }, { "docid": "59f0aead21fc5e0619893d5b5e161ebc", "text": "The use of plastic materials in agriculture causes serious hazards to the environment. The introduction of biodegradable materials, which can be disposed directly into the soil can be one possible solution to this problem. In the present research results of experimental tests carried out on biodegradable film fabricated from natural waste (corn husk) are presented. The film was characterized by Fourier transform infrared spectroscopy (FTIR), differential scanning calorimeter (DSC), thermal gravimetric analysis (TGA) and atomic force microscope (AFM) observation. The film is shown to be readily degraded within 7-9 months under controlled soil conditions, indicating a high biodegradability rate. The film fabricated was use to produce biodegradable pot (BioPot) for seedlings plantation. The introduction and the expanding use of biodegradable materials represent a really promising alternative for enhancing sustainable and environmentally friendly agricultural activities. Keywords—Environment, waste, plastic, biodegradable.", "title": "" }, { "docid": "5baa9d48708a9be8275cd7e45a02fc5e", "text": "The use of artificial intelligence in medicine is currently an issue of great interest, especially with regard to the diagnostic or predictive analysis of medical images. Adoption of an artificial intelligence tool in clinical practice requires careful confirmation of its clinical utility. Herein, the authors explain key methodology points involved in a clinical evaluation of artificial intelligence technology for use in medicine, especially high-dimensional or overparameterized diagnostic or predictive models in which artificial deep neural networks are used, mainly from the standpoints of clinical epidemiology and biostatistics. First, statistical methods for assessing the discrimination and calibration performances of a diagnostic or predictive model are summarized. 
Next, the effects of disease manifestation spectrum and disease prevalence on the performance results are explained, followed by a discussion of the difference between evaluating the performance with use of internal and external datasets, the importance of using an adequate external dataset obtained from a well-defined clinical cohort to avoid overestimating the clinical performance as a result of overfitting in high-dimensional or overparameterized classification model and spectrum bias, and the essentials for achieving a more robust clinical evaluation. Finally, the authors review the role of clinical trials and observational outcome studies for ultimate clinical verification of diagnostic or predictive artificial intelligence tools through patient outcomes, beyond performance metrics, and how to design such studies. © RSNA, 2018.", "title": "" } ]
scidocsrr
6e90b4d6427dc3df690870f10108794d
RFID- based supply chain traceability system
[ { "docid": "259c17740acd554463731d3e1e2912eb", "text": "In recent years, radio frequency identification technology has moved from obscurity into mainstream applications that help speed the handling of manufactured goods and materials. RFID enables identification from a distance, and unlike earlier bar-code technology, it does so without requiring a line of sight. In this paper, the author introduces the principles of RFID, discusses its primary technologies and applications, and reviews the challenges organizations will face in deploying this technology.", "title": "" }, { "docid": "9c751a7f274827e3d8687ea520c6e9a9", "text": "Radio frequency identification systems with passive tags are powerful tools for object identification. However, if multiple tags are to be identified simultaneously, messages from the tags can collide and cancel each other out. Therefore, multiple read cycles have to be performed in order to achieve a high recognition rate. For a typical stochastic anti-collision scheme, we show how to determine the optimal number of read cycles to perform under a given assurance level determining the acceptable rate of missed tags. This yields an efficient procedure for object identification. We also present results on the performance of an implementation.", "title": "" } ]
[ { "docid": "dfc2a459de8400f22969477f28178bd5", "text": "The requirements of three-dimensional (3-D) road objects have increased for various applications, such as geographic information systems and intelligent transportation systems. The use of mobile lidar systems (MLSs) running along road corridors is an effective way to collect accurate road inventories, but MLS feature extraction is challenged by the blind scanning characteristics of lidar systems and the huge amount of data involved; therefore, an automatic process for MLS data is required to improve efficiency of feature extraction. This study developed a coarse-to-fine approach for the extraction of pole-like road objects from MLS data. The major work consists of data preprocessing, coarse-to-fine segmentation, and detection. In data preprocessing, points from different trajectories were reorganized into road parts, and building facades alongside road corridors were removed to reduce their influence. Then, a coarse-to-fine computational framework for the detection of pole-like objects that segments point clouds was proposed. The results show that the pole-like object detection rate for the proposed method was about 90%, and the proposed coarse-to-fine framework was more efficient than the single-scale framework. These results indicate that the proposed method can be used to effectively extract pole-like road objects from MLS data.", "title": "" }, { "docid": "66805d6819e3c4b5f7c71b7a851c7371", "text": "We consider classification of email messages as to whether or not they contain certain \"email acts\", such as a request or a commitment. We show that exploiting the sequential correlation among email messages in the same thread can improve email-act classification. More specifically, we describe a new text-classification algorithm based on a dependency-network based collective classification method, in which the local classifiers are maximum entropy models based on words and certain relational features. We show that statistically significant improvements over a bag-of-words baseline classifier can be obtained for some, but not all, email-act classes. Performance improvements obtained by collective classification appears to be consistent across many email acts suggested by prior speech-act theory.", "title": "" }, { "docid": "afaed9813ab63d0f5a23648a1e0efadb", "text": "We proposed novel airway segmentation methods in volumetric chest computed tomography (CT) using 2.5D convolutional neural net (CNN) and 3D CNN. A method with 2.5D CNN segments airways by voxel-by-voxel classification based on patches which are from three adjacent slices in each of the orthogonal directions including axial, sagittal, and coronal slices around each voxel, while 3D CNN segments by 3D patch-based semantic segmentation using modified 3D U-Net. The extra-validation of our proposed method was demonstrated in 20 test datasets of the EXACT’09 challenge. The detected tree length and the false positive rate was 60.1%, 4.56% for 2.5D CNN and 61.6%, 3.15% for 3D CNN. Our fully automated (end-to-end) segmentation method could be applied in radiological practice.", "title": "" }, { "docid": "cb71e8b2bb1eeaad91a2036a9d3828ac", "text": "This paper surveys methods for simplifying and approximating polygonal surfaces. A polygonal surface is a piecewiselinear surface in 3-D defined by a set of polygons; typically a set of triangles. 
Methods from computer graphics, computer vision, cartography, computational geometry, and other fields are classified, summarized, and compared both practically and theoretically. The surface types range from height fields (bivariate functions), to manifolds, to nonmanifold self-intersecting surfaces. Piecewise-linear curve simplification is also briefly surveyed. This work was supported by ARPA contract F19628-93-C-0171 and NSF Young Investigator award CCR-9357763.", "title": "" }, { "docid": "2bd8a66a3e3cfafc9b13fd7ec47e86fc", "text": "Psidium guajava Linn. (Guava) is used not only as food but also as folk medicine in subtropical areas around the world because of its pharmacologic activities. In particular, the leaf extract of guava has traditionally been used for the treatment of diabetes in East Asia and other countries. Many pharmacological studies have demonstrated the ability of this plant to exhibit antioxidant, hepatoprotective, anti-allergy, antimicrobial, antigenotoxic, antiplasmodial, cytotoxic, antispasmodic, cardioactive, anticough, antidiabetic, antiinflamatory and antinociceptive activities, supporting its traditional uses. Suggesting a wide range of clinical applications for the treatment of infantile rotaviral enteritis, diarrhoea and diabetes.", "title": "" }, { "docid": "f987c0af2814b3f7d75fc33c22530936", "text": "All I Really Need to Know I Learned in Kindergarten By Robert Fulghum (Fulghum 1988) Share everything. Play fair. Don’t hit people. Put things back where you found them. Clean up your own mess. Don’t take things that aren’t yours. Say you’re sorry when you hurt somebody. Wash your hands before you eat. Flush. Warm cookies and cold milk are good for you. Live a balanced life – learn some and think some and draw and paint and sing and dance and play and work every day some. Take a nap every afternoon. When you go out into the world, watch out for traffic, hold hands and stick together. Be aware of wonder. Introduction Pair programming is a style of programming in which two programmers work side-by-side at one computer, continuously collaborating on the same design, algorithm, code or test. As discussed below, use of this practice has been demonstrated to improve productivity and quality of software products. Additionally, based on a survey(Williams 1999) of pair programmers (hereafter referred to as “the pair programming survey\"), 100% agreed that they had more confidence in their solution when pair programming than when they program alone.
Likewise, 96% agreed that they enjoy their job more than when programming alone.", "title": "" }, { "docid": "6a19410817766b052a2054b2cb3efe42", "text": "Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan—where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (‘bots’), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.", "title": "" }, { "docid": "724845cb5c9f531e09f2c8c3e6f52fe4", "text": "Deep learning has given way to a new era of machine learning, apart from computer vision. Convolutional neural networks have been implemented in image classification, segmentation and object detection. Despite recent advancements, we are still in the very early stages and have yet to settle on best practices for network architecture in terms of deep design, small in size and a short training time. In this work, we propose a very deep neural network comprised of 16 Convolutional layers compressed with the Fire Module adapted from the SQUEEZENET model. We also call for the addition of residual connections to help suppress degradation. This model can be implemented on almost every neural network model with fully incorporated residual learning. This proposed model Residual-Squeeze-VGG16 (ResSquVGG16) trained on the large-scale MIT Places365-Standard scene dataset. In our tests, the model performed with accuracy similar to the pre-trained VGG16 model in Top-1 and Top-5 validation accuracy while also enjoying a 23.86% reduction in training time and an 88.4% reduction in size. In our tests, this model was trained from scratch. Keywords— Convolutional Neural Networks; VGG16; Residual learning; Squeeze Neural Networks; Residual-Squeeze-VGG16; Scene Classification; ResSquVGG16.", "title": "" }, { "docid": "bbcd26c47892476092a779869be7040c", "text": "This article reviews the thyroid system, mainly from a mammalian standpoint. However, the thyroid system is highly conserved among vertebrate species, so the general information on thyroid hormone production and feedback through the hypothalamic-pituitary-thyroid (HPT) axis should be considered for all vertebrates, while species-specific differences are highlighted in the individual articles. This background article begins by outlining the HPT axis with its components and functions. 
For example, it describes the thyroid gland, its structure and development, how thyroid hormones are synthesized and regulated, the role of iodine in thyroid hormone synthesis, and finally how the thyroid hormones are released from the thyroid gland. It then progresses to detail areas within the thyroid system where disruption could occur or is already known to occur. It describes how thyroid hormone is transported in the serum and into the tissues on a cellular level, and how thyroid hormone is metabolized. There is an in-depth description of the alpha and beta thyroid hormone receptors and their functions, including how they are regulated, and what has been learned from the receptor knockout mouse models. The nongenomic actions of thyroid hormone are also described, such as in glucose uptake, mitochondrial effects, and its role in actin polymerization and vesicular recycling. The article discusses the concept of compensation within the HPT axis and how this fits into the paradigms that exist in thyroid toxicology/endocrinology. There is a section on thyroid hormone and its role in mammalian development: specifically, how it affects brain development when there is disruption to the maternal, the fetal, the newborn (congenital), or the infant thyroid system. Thyroid function during pregnancy is critical to normal development of the fetus, and several spontaneous mutant mouse lines are described that provide research tools to understand the mechanisms of thyroid hormone during mammalian brain development. Overall this article provides a basic understanding of the thyroid system and its components. The complexity of the thyroid system is clearly demonstrated, as are new areas of research on thyroid hormone physiology and thyroid hormone action developing within the field of thyroid endocrinology. This review provides the background necessary to review the current assays and endpoints described in the following articles for rodents, fishes, amphibians, and birds.", "title": "" }, { "docid": "8437f899a40cf54489b8e86870c32616", "text": "Lifelong machine learning (or lifelong learning) is an advanced machine learning paradigm that learns continuously, accumulates the knowledge learned in previous tasks, and uses it to help future learning. In the process, the learner becomes more and more knowledgeable and effective at learning. This learning ability is one of the hallmarks of human intelligence. However, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model. It makes no attempt to retain the learned knowledge and use it in future learning. Although this isolated learning paradigm has been very successful, it requires a large number of training examples, and is only suitable for well-defined and narrow tasks. In comparison, we humans can learn effectively with a few examples because we have accumulated so much knowledge in the past which enables us to learn with little data or effort. Furthermore, we are able to discover new problems in the usage process of the learned knowledge or model. This enables us to learn more and more continually in a self-motivated manner. We can also adapt our previous knowledge to solve unfamiliar problems and learn in the process. Lifelong learning aims to achieve these capabilities.
As statistical machine learning matures, it is time to make a major effort to break the isolated learning tradition and to study lifelong learning to bring machine learning to a new height. Applications such as intelligent assistants, chatbots, and physical robots that interact with humans and systems in real-life environments are also calling for such lifelong learning capabilities. Without the ability to accumulate the learned knowledge and use it to learn more knowledge incrementally, a system will probably never be truly intelligent. This book serves as an introductory text and survey to lifelong learning.", "title": "" }, { "docid": "d063f8a20e2b6522fe637794e27d7275", "text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.", "title": "" }, { "docid": "7c7801d472e3a03986ec4000d9d86ca8", "text": "The purpose of this study is to examine structural relationships among the capabilities, processes, and performance of knowledge management, and suggest strategic directions for the successful implementation of knowledge management. To serve this purpose, the authors conducted an extensive survey of 68 knowledge management-adopting Korean firms in diverse industries and collected 215 questionnaires. Analyzing hypothesized structural relationships with the data collected, they found that there exists statistically significant relationships among knowledge management capabilities, processes, and performance. The empirical results of this study also support the wellknown strategic hypothesis of the balanced scorecard (BSC). © 2007 Wiley Periodicals, Inc.", "title": "" }, { "docid": "00de76b9a27182c5551598871326f6b2", "text": "The development of computational thinking skills through computer programming is a major topic in education, as governments around the world are introducing these skills in the school curriculum. 
In consequence, educators and students are facing this discipline for the first time. Although there are many technologies that assist teachers and learners in the learning of this competence, there is a lack of tools that support them in the assessment tasks. This paper compares the computational thinking score provided by Dr. Scratch, a free/libre/open source software assessment tool for Scratch, with McCabe's Cyclomatic Complexity and Halstead's metrics, two classic software engineering metrics that are globally recognized as a valid measurement for the complexity of a software system. The findings, which prove positive, significant, moderate to strong correlations between them, could be therefore considered as a validation of the complexity assessment process of Dr. Scratch.", "title": "" }, { "docid": "ffa5ae359807884c2218b92d2db2a584", "text": "We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources. To our knowledge, this is the first machine learning-based method specifically for classifying consumer health questions. We demonstrate how previous approaches to medical question classification are insufficient to achieve high accuracy on this task. Additionally, we describe, manually annotate, and automatically classify three important question elements that improve question classification over previous techniques. Our results and analysis illustrate the difficulty of the task and the future directions that are necessary to achieve high-performing consumer health question classification.", "title": "" }, { "docid": "9147cc4e2d26cea9c7d90b9e9dfee7a0", "text": "We investigate the expressiveness of the microfacet model for isotropic bidirectional reflectance distribution functions (BRDFs) measured from real materials by introducing a non-parametric factor model that represents the model’s functional structure but abandons restricted parametric formulations of its factors. We propose a new objective based on compressive weighting that controls rendering error in high-dynamic-range BRDF fits better than previous factorization approaches. We develop a simple numerical procedure to minimize this objective and handle dependencies that arise between microfacet factors. Our method faithfully captures a more comprehensive set of materials than previous state-of-the-art parametric approaches yet remains compact (3.2KB per BRDF). We experimentally validate the benefit of the microfacet model over a naïve orthogonal factorization and show that fidelity for diffuse materials is modestly improved by fitting an unrestricted shadowing/masking factor. We also compare against a recent data-driven factorization approach [Bilgili et al. 2011] and show that our microfacet-based representation improves rendering accuracy for most materials while reducing storage by more than 10 ×.", "title": "" }, { "docid": "d5130b0353dd05e6a0e6e107c9b863e0", "text": "We study Euler–Poincaré systems (i.e., the Lagrangian analogue of LiePoisson Hamiltonian systems) defined on semidirect product Lie algebras. We first give a derivation of the Euler–Poincaré equations for a parameter dependent Lagrangian by using a variational principle of Lagrange d’Alembert type. Then we derive an abstract Kelvin-Noether theorem for these equations. We also explore their relation with the theory of Lie-Poisson Hamiltonian systems defined on the dual of a semidirect product Lie algebra. 
The Legendre transformation in such cases is often not invertible; thus, it does not produce a corresponding Euler–Poincaré system on that Lie algebra. We avoid this potential difficulty by developing the theory of Euler–Poincaré systems entirely within the Lagrangian framework. We apply the general theory to a number of known examples, including the heavy top, ideal compressible fluids and MHD. We also use this framework to derive higher dimensional Camassa-Holm equations, which have many potentially interesting analytical properties. These equations are Euler-Poincaré equations for geodesics on diffeomorphism groups (in the sense of the Arnold program) but where the metric is H rather than L.", "title": "" }, { "docid": "d7dc0dd72295a5c8e49afb4ed3bb763f", "text": "Many significant sources of error take place in the smart antenna system like mismatching between the supposed steering vectors and the real vectors, insufficient calibration of array antenna, etc. These errors correspond to adding spatially white noise to each element of the array antenna, therefore the performance of the smart antenna falls and the desired output signal is destroyed. This paper presents a performance study of a smart antenna system at different noise levels using five adaptive beamforming algorithms and compares between them. The investigated algorithms are Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Sample Matrix Inversion (SMI), Recursive Least Square (RLS) and Hybrid Least Mean Square / Sample Matrix Inversion (LMS/SMI). MATLAB simulation results are illustrated to investigate the performance of these algorithms.", "title": "" }, { "docid": "d10ec03d91d58dd678c995ec1877c710", "text": "Major depressive disorders, long considered to be of neurochemical origin, have recently been associated with impairments in signaling pathways that regulate neuroplasticity and cell survival. Agents designed to directly target molecules in these pathways may hold promise as new therapeutics for depression.", "title": "" }, { "docid": "c1af668bdeeda5871e3bc6a602f022e6", "text": "Within the parallel computing domain, field programmable gate arrays (FPGA) are no longer restricted to their traditional role as substitutes for application-specific integrated circuits-as hardware \"hidden\" from the end user. Several high performance computing vendors offer parallel re configurable computers employing user-programmable FPGAs. These exciting new architectures allow end-users to, in effect, create reconfigurable coprocessors targeting the computationally intensive parts of each problem. The increased capability of contemporary FPGAs coupled with the embarrassingly parallel nature of the Jacobi iterative method make the Jacobi method an ideal candidate for hardware acceleration. This paper introduces a parameterized design for a deeply pipelined, highly parallelized IEEE 64-bit floating-point version of the Jacobi method. A Jacobi circuit is implemented using a Xilinx Virtex-II Pro as the target FPGA device. Implementation statistics and performance estimates are presented.", "title": "" }, { "docid": "3bb6f64769a92fce9fa0b33fd654bc88", "text": "The passive dynamic walker (PDW) has a remarkable characteristic that it realizes cyclic locomotion without planning the joint trajectories.
However, it cannot control the walking behavior because it is dominated by the fixed body dynamics. Observing the human cyclic locomotion emerged by elastic muscles, we add the compliant hip joint on PDW, and we propose a \"phasic dynamics tuner\" that changes the body dynamics by tuning the joint compliance in order to control the walking behavior. The joint compliance is obtained by driving the joint utilizing antagonistic and agonistic McKibben pneumatic actuators. This paper shows that PDW with the compliant joint and the phasic dynamics tuner enhances the walking performance than present PDW with passive free joints. The phasic dynamics tuner can change the walking velocity by tuning the joint compliance. Experimental results show the effectiveness of the joint compliance and the phasic dynamics tuner.", "title": "" } ]
scidocsrr
fcbebd940f001b306b7f68486b0a7c77
Expression: Visualizing Affective Content from Social Streams
[ { "docid": "88e535a63f5c594edb18167ec8a78750", "text": "Finding the weakness of the products from the customers’ feedback can help manufacturers improve their product quality and competitive strength. In recent years, more and more people express their opinions about products online, and both the feedback of manufacturers’ products or their competitors’ products could be easily collected. However, it’s impossible for manufacturers to read every review to analyze the weakness of their products. Therefore, finding product weakness from online reviews becomes a meaningful work. In this paper, we introduce such an expert system, Weakness Finder, which can help manufacturers find their product weakness from Chinese reviews by using aspects based sentiment analysis. An aspect is an attribute or component of a product, such as price, degerm, moisturizing are the aspects of the body wash products. Weakness Finder extracts the features and groups explicit features by using morpheme based method and Hownet based similarity measure, and identify and group the implicit features with collocation selection method for each aspect. Then utilize sentence based sentiment analysis method to determine the polarity of each aspect in sentences. The weakness of product could be found because the weakness is probably the most unsatisfied aspect in customers’ reviews, or the aspect which is more unsatisfied when compared with their competitor’s product reviews. Weakness Finder has been used to help a body wash manufacturer find their product weakness, and our experimental results demonstrate the good performance of the Weakness Finder. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6081f8b819133d40522a4698d4212dfc", "text": "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.", "title": "" }, { "docid": "ae5142ef32fde6096ea4e4a41ba60cb6", "text": "Social media is playing a growing role in elections world-wide. Thus, automatically analyzing electoral tweets has applications in understanding how public sentiment is shaped, tracking public sentiment and polarization with respect to candidates and issues, understanding the impact of tweets from various entities, etc. Here, for the first time, we automatically annotate a set of 2012 US presidential election tweets for a number of attributes pertaining to sentiment, emotion, purpose, and style by crowdsourcing. Overall, more than 100,000 crowdsourced responses were obtained for 13 questions on emotions, style, and purpose. Additionally, we show through an analysis of these annotations that purpose, even though correlated with emotions, is significantly different. Finally, we describe how we developed automatic classifiers, using features from state-of-the-art sentiment analysis systems, to predict emotion and purpose labels, respectively, in new unseen tweets. 
These experiments establish baseline results for automatic systems on this new data.", "title": "" } ]
[ { "docid": "b6508d1f2b73b90a0cfe6399f6b44421", "text": "An alternative to land spreading of manure effluents is to mass-culture algae on the N and P present in the manure and convert manure N and P into algal biomass. The objective of this study was to determine how the fatty acid (FA) content and composition of algae respond to changes in the type of manure, manure loading rate, and to whether the algae was grown with supplemental carbon dioxide. Algal biomass was harvested weekly from indoor laboratory-scale algal turf scrubber (ATS) units using different loading rates of raw and anaerobically digested dairy manure effluents and raw swine manure effluent. Manure loading rates corresponded to N loading rates of 0.2 to 1.3 g TN m−2 day−1 for raw swine manure effluent and 0.3 to 2.3 g TN m−2 day−1 for dairy manure effluents. In addition, algal biomass was harvested from outdoor pilot-scale ATS units using different loading rates of raw and anaerobically digested dairy manure effluents. Both indoor and outdoor units were dominated by Rhizoclonium sp. FA content values of the algal biomass ranged from 0.6 to 1.5% of dry weight and showed no consistent relationship to loading rate, type of manure, or to whether supplemental carbon dioxide was added to the systems. FA composition was remarkably consistent among samples and >90% of the FA content consisted of 14:0, 16:0, 16:1ω7, 16:1ω9, 18:0, 18:1ω9, 18:2 ω6, and 18:3ω3.", "title": "" }, { "docid": "6a65623ddcf2f056cd35724d16805e8f", "text": "641 It has been over two decades since the discovery of quantum tele­ portation, in what is arguably one of the most interesting and exciting implications of the ‘weirdness’ of quantum mechanics. Prior to this landmark discovery, the fascinating idea of teleporta­ tion belonged in the realm of science fiction. First coined in 1931 by Charles H. Fort1, the term ‘teleportation’ has since been used to refer to the process by which bodies and objects are transferred from one location to another, without actually making the jour­ ney along the way. Since then it has become a fixture of pop cul­ ture, perhaps best exemplified by Star Trek’s celebrated catchphrase “Beam me up, Scotty.” In 1993, a seminal paper2 described a quantum information protocol, dubbed quantum teleportation, that shares several of the above features. In this protocol, an unknown quantum state of a physical system is measured and subsequently reconstructed or ‘reassembled’ at a remote location (the physical constituents of the original system remain at the sending location). This process requires classical communication and excludes superluminal com­ munication. Most importantly, it requires the resource of quantum entanglement3,4. Indeed, quantum teleportation can be seen as the protocol in quantum information that most clearly demonstrates the character of quantum entanglement as a resource: without its presence, such a quantum state transfer would not be possible within the laws of quantum mechanics. Quantum teleportation plays an active role in the progress of quantum information science5–8. On the one hand, it is a concep­ tual protocol that is crucial in the development of formal quantum information theory; on the other, it represents a fundamental ingre­ dient to the development of many quantum technologies. Quantum repeaters9, quantum gate teleportation10, measurement­based quan­ tum computing11 and port­based teleportation12 all derive from the basic scheme of quantum teleportation. 
The vision of a quantum network13 draws inspiration from this scheme. Teleportation has also been used as a simple tool for exploring ‘extreme’ physics, such as closed time­like curves14. Today, quantum teleportation has been achieved in laboratories around the world using a variety of different substrates and technolo­ gies, including photonic qubits (light polarization15–21, single rail22,23, dual rails24,25, time­bin26–28 and spin­orbital qubits29), nuclear mag­ netic resonance (NMR)30, optical modes31–39, atomic ensembles40–43, Advances in quantum teleportation", "title": "" }, { "docid": "74235290789c24ce00d54541189a4617", "text": "This article deals with an interesting application of Fractional Order (FO) Proportional Integral Derivative (PID) Controller for speed regulation in a DC Motor Drive. The design of five interdependent Fractional Order controller parameters has been formulated as an optimization problem based on minimization of set point error and controller output. The task of optimization was carried out using Artificial Bee Colony (ABC) algorithm. A comparative study has also been made to highlight the advantage of using a Fractional order PID controller over conventional PID control scheme for speed regulation of application considered. Extensive simulation results are provided to validate the effectiveness of the proposed approach.", "title": "" }, { "docid": "f06e1cd245863415531e65318c97f96b", "text": "In this paper, we propose a new joint dictionary learning method for example-based image super-resolution (SR), using sparse representation. The low-resolution (LR) dictionary is trained from a set of LR sample image patches. Using the sparse representation coefficients of these LR patches over the LR dictionary, the high-resolution (HR) dictionary is trained by minimizing the reconstruction error of HR sample patches. The error criterion used here is the mean square error. In this way we guarantee that the HR patches have the same sparse representation over HR dictionary as the LR patches over the LR dictionary, and at the same time, these sparse representations can well reconstruct the HR patches. Simulation results show the effectiveness of our method compared to the state-of-art SR algorithms.", "title": "" }, { "docid": "117c66505964344d9c350a4e57a4a936", "text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. 
We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.", "title": "" }, { "docid": "4f096ba7fc6164cdbf5d37676d943fa8", "text": "This work presents an intelligent clothes search system based on domain knowledge, targeted at creating a virtual assistant to search clothes matched to fashion and userpsila expectation using all what have already been in real closet. All what garment essentials and fashion knowledge are from visual images. Users can simply submit the desired image keywords, such as elegant, sporty, casual, and so on, and occasion type, such as formal meeting, outdoor dating, and so on, to the system. And then the fashion style recognition module is activated to search the desired clothes within the personal garment database. Category learning with supervised neural networking is applied to cluster garments into different impression groups. The input stimuli of the neural network are three sensations, warmness, loudness, and softness, which are transformed from the physical garment essentials like major color tone, print type, and fabric material. The system aims to provide such an intelligent user-centric services system functions as a personal fashion advisor.", "title": "" }, { "docid": "e27575b8d7a7455f1a8f941adb306a04", "text": "Seung-Joon Yi GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yiseung@seas.upenn.edu Stephen G. McGill GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: smcgill3@seas.upenn.edu Larry Vadakedathu GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: vlarry@seas.upenn.edu Qin He GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: heqin@seas.upenn.edu Inyong Ha Robotis, Seoul, Korea e-mail: dudung@robotis.com Jeakweon Han Robotis, Seoul, Korea e-mail: jkhan@robotis.com Hyunjong Song Robotis, Seoul, Korea e-mail: hjsong@robotis.com Michael Rouleau RoMeLa, Virginia Tech, Blacksburg, Virginia 24061 e-mail: mrouleau@vt.edu Byoung-Tak Zhang BI Lab, Seoul National University, Seoul, Korea e-mail: btzhang@bi.snu.ac.kr Dennis Hong RoMeLa, University of California, Los Angeles, Los Angeles, California 90095 e-mail: dennishong@ucla.edu Mark Yim GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: yim@seas.upenn.edu Daniel D. 
Lee GRASP Lab, University of Pennsylvania, Philadelphia, Pennsylvania 19104 e-mail: ddlee@seas.upenn.edu", "title": "" }, { "docid": "4d2986dffedadfd425505f9e25c5f6cb", "text": "BACKGROUND\nThe use of heart rate variability (HRV) in the management of sport training is a practice which tends to spread, especially in order to prevent the occurrence of states of fatigue.\n\n\nOBJECTIVE\nTo estimate the HRV parameters obtained using a heart rate recording, according to different loads of sporting activities, and to make the possible link with the appearance of fatigue.\n\n\nMETHODS\nEight young football players, aged 14.6 years+/-2 months, playing at league level in Rhône-Alpes, training for 10 to 20 h per week, were followed over a period of 5 months, allowing to obtain 54 recordings of HRV in three different conditions: (i) after rest (ii) after a day with training and (iii) after a day with a competitive match.\n\n\nRESULTS\nUnder the effect of a competitive match, the HRV temporal indicators (heart rate, RR interval, and pNN50) were significantly altered compared to the rest day. The analysis of the sympathovagal balance rose significantly as a result of the competitive constraint (0.72+/-0.17 vs. 0.90+/-0.20; p<0.05).\n\n\nCONCLUSION\nThe main results obtained show that the HRV is an objective and non-invasive monitoring of management of the training of young sportsmen. HRV analysis allowed to highlight any neurovegetative adjustments according to the physical loads. Thus, under the effect of an increase of physical and psychological constraints that a football match represents, the LF/HF ratio rises significantly; reflecting increased sympathetic stimulation, which beyond certain limits could be relevant to prevent the emergence of a state of fatigue.", "title": "" }, { "docid": "5236f684bc0fdf11855a439c9d3256f6", "text": "The smart home is an environment, where heterogeneous electronic devices and appliances are networked together to provide smart services in a ubiquitous manner to the individuals. As the homes become smarter, more complex, and technology dependent, the need for an adequate security mechanism with minimum individual’s intervention is growing. The recent serious security attacks have shown how the Internet-enabled smart homes can be turned into very dangerous spots for various ill intentions, and thus lead the privacy concerns for the individuals. For instance, an eavesdropper is able to derive the identity of a particular device/appliance via public channels that can be used to infer in the life pattern of an individual within the home area network. This paper proposes an anonymous secure framework (ASF) in connected smart home environments, using solely lightweight operations. The proposed framework in this paper provides efficient authentication and key agreement, and enables devices (identity and data) anonymity and unlinkability. One-time session key progression regularly renews the session key for the smart devices and dilutes the risk of using a compromised session key in the ASF. It is demonstrated that computation complexity of the proposed framework is low as compared with the existing schemes, while security has been significantly improved.", "title": "" }, { "docid": "373f0adcc61c010f85bd3839e6bd0fca", "text": "Clusters in document streams, such as online news articles, can be induced by their textual contents, as well as by the temporal dynamics of their arriving patterns. 
Can we leverage both sources of information to obtain a better clustering of the documents, and distill information that is not possible to extract using contents only? In this paper, we propose a novel random process, referred to as the Dirichlet-Hawkes process, to take into account both information in a unified framework. A distinctive feature of the proposed model is that the preferential attachment of items to clusters according to cluster sizes, present in Dirichlet processes, is now driven according to the intensities of cluster-wise self-exciting temporal point processes, the Hawkes processes. This new model establishes a previously unexplored connection between Bayesian Nonparametrics and temporal Point Processes, which makes the number of clusters grow to accommodate the increasing complexity of online streaming contents, while at the same time adapts to the ever changing dynamics of the respective continuous arrival time. We conducted large-scale experiments on both synthetic and real world news articles, and show that Dirichlet-Hawkes processes can recover both meaningful topics and temporal dynamics, which leads to better predictive performance in terms of content perplexity and arrival time of future documents.", "title": "" }, { "docid": "d9b19dd523fd28712df61384252d331c", "text": "Purpose – The purpose of this paper is to examine the ways in which governments build social media and information and communication technologies (ICTs) into e-government transparency initiatives, to promote collaboration with members of the public and the ways in members of the public are able to employ the same social media to monitor government activities. Design/methodology/approach – This study used an iterative strategy that involved conducting a literature review, content analysis, and web site analysis, offering multiple perspectives on government transparency efforts, the role of ICTs and social media in these efforts, and the ability of e-government initiatives to foster collaborative transparency through embedded ICTs and social media. Findings – The paper identifies key initiatives, potential impacts, and future challenges for collaborative e-government as a means of transparency. Originality/value – The paper is one of the first to examine the interrelationships between ICTs, social media, and collaborative e-government to facilitate transparency.", "title": "" }, { "docid": "8f07b133447700536c15edb97d4d8c38", "text": "Author Title Annotation Domain Genre Caesar De Bello Gallico (BG) ~59,000 wd Source Historiography Pliny Epistulae (Ep) ~18,500 wd Target-1 Letters Ovid Ars Amatoria (AA) ~17,500 wd Target-2 Elegiac Poetry • Active Learning • Maximize improvement rate per additional sentence annotated • Provide user with realistic expectations • Predict expected accuracy gain per sentence annotated • User input augments training data, improves domain coverage", "title": "" }, { "docid": "5d150ffc94f7489f19bf4004fabf4f9c", "text": "Multi objective optimization is a promising field which is increasingly being encountered in many areas worldwide. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used to solve Multi objective problems. Various multiobjective evolutionary algorithms have been developed. Their principal reason for development is their ability to find multiple Pareto optimal solution in single run. 
Their Basic motive of evolutionary multiobjective optimization in contrast to singleobjective optimization was optimality, decision making algorithm design (fitness, diversity, and elitism), constraints, and preference. The goal of this paper is to trace the genealogy & review the state of the art of evolutionary multiobjective optimization algorithms.", "title": "" }, { "docid": "43121c7d44b3ad134a2a8ad42b1d43ef", "text": "Web services are emerging technologies to reuse software as services over the Internet by wrapping underlying computing models with XML. Web services are rapidly evolving and are expected to change the paradigms of both software development and use. This panel will discuss the current status and challenges of Web services technologies.", "title": "" }, { "docid": "a97151d20ac0f25e1897b9e66eb77e9b", "text": "In this paper, we propose a novel system-Intelligent Personalized Fashion Recommendation System, which creates a new space in web multimedia mining and recommendation. The proposed system significantly helps customers find their most suitable fashion choices in mass fashion information in the virtual space based on multimedia mining. There are three stand-alone models developed in this paper to optimize the analysis of fashion features in mass fashion trend: (i). Interaction and recommender model, which associated clients' personalized demand with the current fashion trend, and helps clients find the most favorable fashion factors in trend. (ii). Evolutionary hierachical fashion multimedia mining model, which creates a hierachical structure to filer the key components of fashion multimedia information in the virtual space, and it proves to be more efficient for web mass multimedia mining in an evolutionary way. (iii). Color tone analysis model, a relevant and straightforward approach for analysis of main color tone as to the skin and clothing is used. In this model, a refined contour extraction of the fashion model method is also developed to solve the dilemma that the accuracy and efficiency of contour extraction in the dynamic and complex video scene. As evidenced by the experiment, the proposed system outperforms in effectiveness on mass fashion information in the virtual space compared with human, and thus developing a personalized and diversified way for fashion recommendation.", "title": "" }, { "docid": "5603dc3ceba1a270506116eaf32377bb", "text": "OBJECTIVE\nEating at \"fast food\" restaurants has increased and is linked to obesity. This study examined whether living or working near \"fast food\" restaurants is associated with body weight.\n\n\nMETHODS\nA telephone survey of 1033 Minnesota residents assessed body height and weight, frequency of eating at restaurants, and work and home addresses. Proximity of home and work to restaurants was assessed by Global Index System (GIS) methodology.\n\n\nRESULTS\nEating at \"fast food\" restaurants was positively associated with having children, a high fat diet and Body Mass Index (BMI). It was negatively associated with vegetable consumption and physical activity. Proximity of \"fast food\" restaurants to home or work was not associated with eating at \"fast food\" restaurants or with BMI. Proximity of \"non-fast food\" restaurants was not associated with BMI, but was associated with frequency of eating at those restaurants.\n\n\nCONCLUSION\nFailure to find relationships between proximity to \"fast food\" restaurants and obesity may be due to methodological weaknesses, e.g. 
the operational definition of \"fast food\" or \"proximity\", or homogeneity of restaurant proximity. Alternatively, the proliferation of \"fast food\" restaurants may not be a strong unique cause of obesity.", "title": "" }, { "docid": "82c327ecd5402e7319ecaa416dc8e008", "text": "The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.", "title": "" }, { "docid": "726c2879354eadc44961ab40c4b1621d", "text": "This paper provides detailed information on Team Poland’s approach in the electricity price forecasting track of GEFCom2014. A new hybrid model is proposed, consisting of four major blocks: point forecasting, pre-filtering, quantile regression modeling and post-processing. This universal model structure enables independent development of a single block, without affecting performance of the remaining ones. The four-block model design in complemented by including expert judgements, which may be of great importance in periods of unusually high or low electricity demand.", "title": "" }, { "docid": "9d2583618e9e00333d044ac53da65ceb", "text": "The phosphor deposits of the β-sialon:Eu2+ mixed with various amounts (0-1 g) of the SnO₂ nanoparticles were fabricated by the electrophoretic deposition (EPD) process. The mixed SnO₂ nanoparticles was observed to cover onto the particle surfaces of the β-sialon:Eu2+ as well as fill in the voids among the phosphor particles. The external and internal quantum efficiencies (QEs) of the prepared deposits were found to be dependent on the mixing amount of the SnO₂: by comparing with the deposit without any mixing (48% internal and 38% external QEs), after mixing the SnO₂ nanoparticles, the both QEs were improved to 55% internal and 43% external QEs at small mixing amount (0.05 g); whereas, with increasing the mixing amount to 0.1 and 1 g, they were reduced to 36% and 29% for the 0.1 g addition and 15% and 12% l QEs for the 1 g addition. More interestingly, tunable color appearances of the deposits prepared by the EPD process were achieved, from yellow green to blue, by varying the addition amount of the SnO₂, enabling it as an alternative technique instead of altering the voltage and depositing time for the color appearance controllability.", "title": "" }, { "docid": "ac808ecd75ccee74fff89d03e3396f26", "text": "This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. 
The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real world difficulties in a greenhouse which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation and volume is performed accordingly, and classification is done according to size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB Cameras – an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted with the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint. Keywords—Agricultural engineering, computer vision, image processing, flower detection.", "title": "" } ]
scidocsrr
3f08135a0ac8e14303e5c2eb7c87a5ec
Automatic knowledge extraction from documents
[ { "docid": "d67ab983c681136864f4a66c5b590080", "text": "scoring in DeepQA C. Wang A. Kalyanpur J. Fan B. K. Boguraev D. C. Gondek Detecting semantic relations in text is an active problem area in natural-language processing and information retrieval. For question answering, there are many advantages of detecting relations in the question text because it allows background relational knowledge to be used to generate potential answers or find additional evidence to score supporting passages. This paper presents two approaches to broad-domain relation extraction and scoring in the DeepQA question-answering framework, i.e., one based on manual pattern specification and the other relying on statistical methods for pattern elicitation, which uses a novel transfer learning technique, i.e., relation topics. These two approaches are complementary; the rule-based approach is more precise and is used by several DeepQA components, but it requires manual effort, which allows for coverage on only a small targeted set of relations (approximately 30). Statistical approaches, on the other hand, automatically learn how to extract semantic relations from the training data and can be applied to detect a large amount of relations (approximately 7,000). Although the precision of the statistical relation detectors is not as high as that of the rule-based approach, their overall impact on the system through passage scoring is statistically significant because of their broad coverage of knowledge.", "title": "" } ]
[ { "docid": "00bd59f93d3f5e69dbffad87e5b6e711", "text": "In this paper, a Bayesian approach to tracking a single target of interest (TOI) using passive sonar is presented. The TOI is assumed to be in the presence of other loud interfering targets, or interferers. To account for the interferers, a single-signal likelihood function (SSLF) is proposed which uses maximum-likelihood estimates (MLEs) in place of nuisance parameters. Since there is uncertainty in signal origin, we propose a computationally efficient method for computing association probabilities. The final proposed SSLF accounts for sidelobe interference from other signals, reflects the uncertainty caused by the array beampattern, is signal-to-noise ratio (SNR) dependent, and reflects uncertainty caused by unknown signal origin. Various examples are considered, which include moving and stationary targets. For the examples, the sensors are assumed to be uniformly spaced linear arrays. The arrays may be stationary or moving and there may be one or more present.", "title": "" }, { "docid": "ba4c8b593db6991507853bb6c8759aea", "text": "This paper proposes an accurate four-transistor temperature sensor designed, and developed, for thermal testing and monitoring circuits in deep submicron technologies. A previous three-transistor temperature sensor, which utilizes the temperature characteristic of the threshold voltage, shows highly linear characteristics at a power supply voltage of 1.8 V or more; however, the supply voltage is reduced to 1 V in a 90-nm CMOS process. Since the temperature coefficient of the operating point's current at a 1-V supply voltage is steeper than the coefficient at a 1.8-V supply voltage, the operating point's current at high temperature becomes quite small and the output voltage goes into the subthreshold region or the cutoff region. Therefore, the operating condition of the conventional temperature sensor cannot be satisfied at 1-V supply and this causes degradation of linearity. To improve linearity at a 1-V supply voltage, one transistor is added to the conventional sensor. This additional transistor, which works in the saturation region, changes the temperature coefficient gradient of the operating point's current and moves the operating points at each temperature to appropriate positions within the targeted temperature range. The sensor features an extremely small area of 11.6times4.1 mum2 and low power consumption of about 25 muW. The performance of the sensor is highly linear and the predicted temperature error is merely -1.0 to +0.8degC using a two-point calibration within the range of 50degC to 125degC. The sensor has been implemented in the ASPLA CMOS 90-nm 1P7M process and has been tested successfully with a supply voltage of 1 V.", "title": "" }, { "docid": "7abdb102a876d669bdf254f7d91121c1", "text": "OBJECTIVE\nRegular physical activity (PA) is important for maintaining long-term physical, cognitive, and emotional health. However, few older adults engage in routine PA, and even fewer take advantage of programs designed to enhance PA participation. Though most managed Medicare members have free access to the Silver Sneakers and EnhanceFitness PA programs, the vast majority of eligible seniors do not utilize these programs. 
The goal of this qualitative study was to better understand the barriers to and facilitators of PA and participation in PA programs among older adults.\n\n\nDESIGN\nThis was a qualitative study using focus group interviews.\n\n\nSETTING\nFocus groups took place at three Group Health clinics in King County, Washington.\n\n\nPARTICIPANTS\nFifty-two randomly selected Group Health Medicare members between the ages of 66 to 78 participated.\n\n\nMETHODS\nWe conducted four focus groups with 13 participants each. Focus group discussions were audio-recorded, transcribed, and analyzed using an inductive thematic approach and a social-ecological framework.\n\n\nRESULTS\nMen and women were nearly equally represented among the participants, and the sample was largely white (77%), well-educated (69% college graduates), and relatively physically active. Prominent barriers to PA and PA program participation were physical limitations due to health conditions or aging, lack of professional guidance, and inadequate distribution of information on available and appropriate PA options and programs. Facilitators included the motivation to maintain physical and mental health and access to affordable, convenient, and stimulating PA options.\n\n\nCONCLUSION\nOlder adult populations may benefit from greater support and information from their providers and health care systems on how to safely and successfully improve or maintain PA levels through later adulthood. Efforts among health care systems to boost PA among older adults may need to consider patient-centered adjustments to current PA programs, as well as alternative methods for promoting overall active lifestyle choices.", "title": "" }, { "docid": "b1d61ca503702f950ef1275b904850e7", "text": "Prior research has demonstrated a clear relationship between experiences of racial microaggressions and various indicators of psychological unwellness. One concern with these findings is that the role of negative affectivity, considered a marker of neuroticism, has not been considered. Negative affectivity has previously been correlated to experiences of racial discrimination and psychological unwellness and has been suggested as a cause of the observed relationship between microaggressions and psychopathology. We examined the relationships between self-reported frequency of experiences of microaggressions and several mental health outcomes (i.e., anxiety [Beck Anxiety Inventory], stress [General Ethnic and Discrimination Scale], and trauma symptoms [Trauma Symptoms of Discrimination Scale]) in 177 African American and European American college students, controlling for negative affectivity (the Positive and Negative Affect Schedule) and gender. Results indicated that African Americans experience more racial discrimination than European Americans. Negative affectivity in African Americans appears to be significantly related to some but not all perceptions of the experience of discrimination. A strong relationship between racial mistreatment and symptoms of psychopathology was evident, even after controlling for negative affectivity. In summary, African Americans experience clinically measurable anxiety, stress, and trauma symptoms as a result of racial mistreatment, which cannot be wholly explained by individual differences in negative affectivity. 
Future work should examine additional factors in these relationships, and targeted interventions should be developed to help those suffering as a result of racial mistreatment and to reduce microaggressions.", "title": "" }, { "docid": "484a7acba548ef132d83fc9931a45071", "text": "This paper is focused on tracking control for a rigid body payload, that is connected to an arbitrary number of quadrotor unmanned aerial vehicles via rigid links. An intrinsic form of the equations of motion is derived on the nonlinear configuration manifold, and a geometric controller is constructed such that the payload asymptotically follows a given desired trajectory for its position and attitude. The unique feature is that the coupled dynamics between the rigid body payload, links, and quadrotors are explicitly incorporated into control system design and stability analysis. These are developed in a coordinate-free fashion to avoid singularities and complexities that are associated with local parameterizations. The desirable features of the proposed control system are illustrated by a numerical example.", "title": "" }, { "docid": "f56bac3cb4ea99626afa51907e909fa3", "text": "An overview of technologies concerned with distributing the execution of simulation programs across multiple processors is presented. Here, particular emphasis is placed on discrete event simulations. The High Level Architecture (HLA) developed by the Department of Defense in the United States is first described to provide a concrete example of a contemporary approach to distributed simulation. The remainder of this paper is focused on time management, a central issue concerning the synchronization of computations on different processors. Time management algorithms broadly fall into two categories, termed conservative and optimistic synchronization. A survey of both conservative and optimistic algorithms is presented focusing on fundamental principles and mechanisms. Finally, time management in the HLA is discussed as a means to illustrate how this standard supports both approaches to synchronization.", "title": "" }, { "docid": "324c6f4592ed201aebdb4a1a87740984", "text": "In this paper, we propose the Electric Vehicle Routing Problem with Time Windows and Mixed Fleet (E-VRPTWMF) to optimize the routing of a mixed fleet of electric commercial vehicles (ECVs) and conventional internal combustion commercial vehicles (ICCVs). Contrary to existing routing models for ECVs, which assume energy consumption to be a linear function of traveled distance, we utilize a realistic energy consumption model that incorporates speed, gradient and cargo load distribution. This is highly relevant in the context of ECVs because energy consumption determines the maximal driving range of ECVs and the recharging times at stations. To address the problem, we develop an Adaptive Large Neighborhood Search algorithm that is enhanced by a local search for intensification. In numerical studies on newly designed E-VRPTWMF test instances, we investigate the effect of considering the actual load distribution on the structure and quality of the generated solutions. Moreover, we study the influence of different objective functions on solution attributes and on the contribution of ECVs to the overall routing costs. 
Finally, we demonstrate the performance of the developed algorithm on benchmark instances of the related problems VRPTW and E-VRPTW.", "title": "" }, { "docid": "02621546c67e6457f350d0192b616041", "text": "Binary embedding of high-dimensional data requires long codes to preserve the discriminative power of the input space. Traditional binary coding methods often suffer from very high computation and storage costs in such a scenario. To address this problem, we propose Circulant Binary Embedding (CBE) which generates binary codes by projecting the data with a circulant matrix. The circulant structure enables the use of Fast Fourier Transformation to speed up the computation. Compared to methods that use unstructured matrices, the proposed method improves the time complexity from O(d) to O(d log d), and the space complexity from O(d) to O(d) where d is the input dimensionality. We also propose a novel time-frequency alternating optimization to learn data-dependent circulant projections, which alternatively minimizes the objective in original and Fourier domains. We show by extensive experiments that the proposed approach gives much better performance than the state-of-the-art approaches for fixed time, and provides much faster computation with no performance degradation for fixed number of bits.", "title": "" }, { "docid": "49e574e30b35811205e55c582eccc284", "text": "Intracerebral hemorrhage (ICH) is a devastating disease with high rates of mortality and morbidity. The major risk factors for ICH include chronic arterial hypertension and oral anticoagulation. After the initial hemorrhage, hematoma expansion and perihematoma edema result in secondary brain damage and worsened outcome. A rapid onset of focal neurological deficit with clinical signs of increased intracranial pressure is strongly suggestive of a diagnosis of ICH, although cranial imaging is required to differentiate it from ischemic stroke. ICH is a medical emergency and initial management should focus on urgent stabilization of cardiorespiratory variables and treatment of intracranial complications. More than 90% of patients present with acute hypertension, and there is some evidence that acute arterial blood pressure reduction is safe and associated with slowed hematoma growth and reduced risk of early neurological deterioration. However, early optimism that outcome might be improved by the early administration of recombinant factor VIIa (rFVIIa) has not been substantiated by a large phase III study. ICH is the most feared complication of warfarin anticoagulation, and the need to arrest intracranial bleeding outweighs all other considerations. Treatment options for warfarin reversal include vitamin K, fresh frozen plasma, prothrombin complex concentrates, and rFVIIa. There is no evidence to guide the specific management of antiplatelet therapy-related ICH. With the exceptions of placement of a ventricular drain in patients with hydrocephalus and evacuation of a large posterior fossa hematoma, the timing and nature of other neurosurgical interventions is also controversial. There is substantial evidence that management of patients with ICH in a specialist neurointensive care unit, where treatment is directed toward monitoring and managing cardiorespiratory variables and intracranial pressure, is associated with improved outcomes. Attention must be given to fluid and glycemic management, minimizing the risk of ventilator-acquired pneumonia, fever control, provision of enteral nutrition, and thromboembolic prophylaxis. 
There is an increasing awareness that aggressive management in the acute phase can translate into improved outcomes after ICH.", "title": "" }, { "docid": "72f42589ab86c878517feaab5914cf65", "text": "This paper proposes an analytical-cum-conceptual framework for understanding the nature of institutions as well as their changes. First, it proposes a new definition of institution based on the notion of common knowledge regarding self-sustaining features of social interactions with a hope to integrate various disciplinary approaches to institutions and their changes. Second, it specifies some generic mechanisms of institutional coherence and change -overlapping social embeddedness, Schumpeterian innovation in bundling games and dynamic institutional complementarities -useful for understanding the dynamic interactions of economic, political, social, organizational and cognitive factors.", "title": "" }, { "docid": "0d3119ef15fb65e75a6fcb355d1efc5a", "text": "A battery management system (BMS) is a system that manages a rechargeable battery (cell or battery pack), by protecting the battery to operate beyond its safe limits and monitoring its state of charge (SoC) & state of health (SoH). BMS has been the essential integral part of hybrid electrical vehicles (HEVs) & electrical vehicles (EVs). BMS provides safety to the system and user with run time monitoring of battery for any critical hazarder conditions. In the present work, design & simulation of BMS for EVs is presented. The entire model of BMS & all other functional blocks of BMS are implemented in Simulink toolbox of MATLAB R2012a. The BMS presented in this research paper includes Neural Network Controller (NNC), Fuzzy Logic Controller (FLC) & Statistical Model. The battery parameters required to design and simulate the BMS are extracted from the experimental results and incorporated in the model. The Neuro-Fuzzy approach is used to model the electrochemical behavior of the Lead-acid battery (selected for case study) then used to estimate the SoC. The Statistical model is used to address battery's SoH. Battery cycle test results have been used for initial model design, Neural Network training and later; it is transferred to the design & simulation of BMS using Simulink. The simulation results are validated by experimental results and MATLAB/Simulink simulation. This model provides more than 97% accuracy in SoC and reasonably accurate SoH.", "title": "" }, { "docid": "82edffdadaee9ac0a5b11eb686e109a1", "text": "This paper highlights different security threats and vulnerabilities that is being challenged in smart-grid utilizing Distributed Network Protocol (DNP3) as a real time communication protocol. Experimentally, we will demonstrate two scenarios of attacks, unsolicited message attack and data set injection. The experiments were run on a computer virtual environment and then simulated in DETER testbed platform. The use of intrusion detection system will be necessary to identify attackers targeting different part of the smart grid infrastructure. Therefore, mitigation techniques will be used to ensure a healthy check of the network and we will propose the use of host-based intrusion detection agent at each Intelligent Electronic Device (IED) for the purpose of detecting the intrusion and mitigating it. 
Performing attacks, attack detection, prevention and counter measures will be our primary goal to achieve in this research paper.", "title": "" }, { "docid": "34b3c5ee3ea466c23f5c7662f5ce5b33", "text": "A hstruct -The concept of a super value node is developed to estend the theor? of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessa? to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by reprewnting value function separability in the structure of the graph of the influence diagram. formulation is simplified and operations on the model can take advantage of the wparability. Froni the decision analysis perspective. this allows simple exploitation of separabilih in the value function of a decision problem which can significantly reduce memory and computation requirements. Importantly. this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunih for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They a h allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages.", "title": "" }, { "docid": "e7bd7e17d90813d60ca147affb25644d", "text": "The absence of a comprehensive database of locations where bacteria live is an important obstacle for biologists to understand and study the interactions between bacteria and their habitats. This paper reports the results to a challenge, set forth by the Bacteria Biotopes Task of the BioNLP Shared Task 2013. Two systems are explained: Sub-task 1 system for identifying habitat mentions in unstructured biomedical text and normalizing them through the OntoBiotope ontology and Sub-task 2 system for extracting localization and partof relations between bacteria and habitats. Both approaches rely on syntactic rules designed by considering the shallow linguistic analysis of the text. Sub-task 2 system also makes use of discourse-based rules. The two systems achieve promising results on the shared task test data set.", "title": "" }, { "docid": "7e2b47f3b8fb0dfcef2ea010fab4ba48", "text": "The purpose of this study is to provide evidence-based and expert consensus recommendations for lung ultrasound with focus on emergency and critical care settings. A multidisciplinary panel of 28 experts from eight countries was involved. Literature was reviewed from January 1966 to June 2011. Consensus members searched multiple databases including Pubmed, Medline, OVID, Embase, and others. The process used to develop these evidence-based recommendations involved two phases: determining the level of quality of evidence and developing the recommendation. The quality of evidence is assessed by the grading of recommendation, assessment, development, and evaluation (GRADE) method. However, the GRADE system does not enforce a specific method on how the panel should reach decisions during the consensus process. Our methodology committee decided to utilize the RAND appropriateness method for panel judgment and decisions/consensus. Seventy-three proposed statements were examined and discussed in three conferences held in Bologna, Pisa, and Rome. Each conference included two rounds of face-to-face modified Delphi technique. Anonymous panel voting followed each round. 
The panel did not reach an agreement and therefore did not adopt any recommendations for six statements. Weak/conditional recommendations were made for 2 statements, and strong recommendations were made for the remaining 65 statements. The statements were then recategorized and grouped to their current format. Internal and external peer-review processes took place before submission of the recommendations. Updates will occur at least every 4 years or whenever significant major changes in evidence appear. This document reflects the overall results of the first consensus conference on “point-of-care” lung ultrasound. Statements were discussed and elaborated by experts who published the vast majority of papers on clinical use of lung ultrasound in the last 20 years. Recommendations were produced to guide implementation, development, and standardization of lung ultrasound in all relevant settings.", "title": "" }, { "docid": "a12422abe3e142b83f5f242dc754cca1", "text": "Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains, or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.", "title": "" }, { "docid": "3dab0441ca1e4fb39296be8006611690", "text": "A content-based personalized recommendation system learns user specific profiles from user feedback so that it can deliver information tailored to each individual user's interest. A system serving millions of users can learn a better user profile for a new user, or a user with little feedback, by borrowing information from other users through the use of a Bayesian hierarchical model. Learning the model parameters to optimize the joint data likelihood from millions of users is very computationally expensive. The commonly used EM algorithm converges very slowly due to the sparseness of the data in IR applications. This paper proposes a new fast learning technique to learn a large number of individual user profiles. The efficacy and efficiency of the proposed algorithm are justified by theory and demonstrated on actual user data from Netflix and MovieLens.", "title": "" }, { "docid": "5623ce7ffce8492d637d52975df3ac99", "text": "The online advertising industry is currently based on two dominant business models: the pay-per-impression model and the pay-per-click model. With the growth of sponsored search during the last few years, there has been a move toward the pay-per-click model as it decreases the risk to small advertisers. An alternative model, discussed but not widely used in the advertising industry, is pay-per-conversion, or more generally, pay-per-action. 
In this paper, we discuss mechanisms for the pay-per-action model and various challenges involved in designing such mechanisms.", "title": "" }, { "docid": "24e943940f1bd1328dba1de2e15d3137", "text": "The use of external databases to generate training data, also known as Distant Supervision, has become an effective way to train supervised relation extractors but this approach inherently suffers from noise. In this paper we propose a method for noise reduction in distantly supervised training data, using a discriminative classifier and semantic similarity between the contexts of the training examples. We describe an active learning strategy which exploits hierarchical clustering of the candidate training samples. To further improve the effectiveness of this approach, we study the use of several methods for dimensionality reduction of the training samples. We find that semantic clustering of training data combined with cluster-based active learning allows filtering the training data, hence facilitating the creation of a clean training set for relation extraction, at a reduced manual labeling cost.", "title": "" }, { "docid": "0ebc0724a8c966e93e05fb7fce80c1ab", "text": "Firms in the financial services industry have been faced with the dramatic and relatively recent emergence of new technology innovations, and process disruptions. The industry as a whole, and many new fintech start-ups are looking for new pathways to successful business models, the creation of enhanced customer experience, and new approaches that result in services transformation. Industry and academic observers believe this to be more of a revolution than a set of less impactful changes, with financial services as a whole due for major improvements in efficiency, in customer centricity and informedness. The long-standing dominance of leading firms that are not able to figure out how to effectively hook up with the “Fintech Revolution” is at stake. This article presents a new fintech innovation mapping approach that enables the assessment of the extent to which there are changes and transformations in four key areas of the financial services industry. We discuss: (1) operations management in financial services, and the changes that are occurring there; (2) technology innovations that have begun to leverage the execution and stakeholder value associated with payments settlement, cryptocurrencies, blockchain technologies, and cross-border payment services; (3) multiple fintech innovations that have impacted lending and deposit services, peer-to-peer (P2P) lending and the use of social media; (4) issues with respect to investments, financial markets, trading, risk management, robo-advisory and related services that are influenced by blockchain and fintech innovations.", "title": "" } ]
scidocsrr
e84c8e4b16672d8baa4e370a4dead84d
Seq-NMS for Video Object Detection
[ { "docid": "5300e9938a545895c8b97fe6c9d06aa5", "text": "Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and but also to simultaneously select the appropriate number of components for each pixel.", "title": "" } ]
[ { "docid": "4a741431c708cd92a250bcb91e4f1638", "text": "PURPOSE\nIn today's workplace, nurses are highly skilled professionals possessing expertise in both information technology and nursing. Nursing informatics competencies are recognized as an important capability of nurses. No established guidelines existed for nurses in Asia. This study focused on identifying the nursing informatics competencies required of nurses in Taiwan.\n\n\nMETHODS\nA modified Web-based Delphi method was used for two expert groups in nursing, educators and administrators. Experts responded to 323 items on the Nursing Informatics Competencies Questionnaire, modified from the initial work of Staggers, Gassert and Curran to include 45 additional items. Three Web-based Delphi rounds were conducted. Analysis included detailed item analysis. Competencies that met 60% or greater agreement of item importance and appropriate level of nursing practice were included.\n\n\nRESULTS\nN=32 experts agreed to participate in Round 1, 23 nursing educators and 9 administrators. The participation rates for Rounds 2 and 3=68.8%. By Round 3, 318 of 323 nursing informatics competencies achieved required consensus levels. Of the new competencies, 42 of 45 were validated. A high degree of agreement existed for specific nursing informatics competencies required for nurses in Taiwan (97.8%).\n\n\nCONCLUSIONS\nThis study provides a current master list of nursing informatics competency requirements for nurses at four levels in the U.S. and Taiwan. The results are very similar to the original work of Staggers et al. The results have international relevance because of the global importance of information technology for the nursing profession.", "title": "" }, { "docid": "9973dab94e708f3b87d52c24b8e18672", "text": "We show that two popular discounted reward natural actor-critics, NAC-LSTD and eNAC, follow biased estimates of the natural policy gradient. We derive the first unbiased discounted reward natural actor-critics using batch and iterative approaches to gradient estimation and prove their convergence to globally optimal policies for discrete problems and locally optimal policies for continuous problems. Finally, we argue that the bias makes the existing algorithms more appropriate for the average reward setting.", "title": "" }, { "docid": "62da9a85945652f195086be0ef780827", "text": "Fingerprint biometric is one of the most successful biometrics applied in both forensic law enforcement and security applications. Recent developments in fingerprint acquisition technology have resulted in touchless live scan devices that generate 3D representation of fingerprints, and thus can overcome the deformation and smearing problems caused by conventional contact-based acquisition techniques. However, there are yet no 3D full fingerprint databases with their corresponding 2D prints for fingerprint biometric research. This paper presents a 3D fingerprint database we have established in order to investigate the 3D fingerprint biometric comprehensively. It consists of 3D fingerprints as well as their corresponding 2D fingerprints captured by two commercial fingerprint scanners from 150 subjects in Australia. Besides, we have tested the performance of 2D fingerprint verification, 3D fingerprint verification, and 2D to 3D fingerprint verification. The results show that more work is needed to improve the performance of 2D to 3D fingerprint verification. 
In addition, the database is expected to be released publicly in late 2014.", "title": "" }, { "docid": "23866b968903087ae9b2b18444a0720b", "text": "This paper presents a monocular vision based 3D bicycle tracking framework for intelligent vehicles based on a detection method exploiting a deformable part model and a tracking method using an Interacting Multiple Model (IMM) algorithm. Bicycle tracking is important because bicycles share the road with vehicles and can move at comparable speeds in urban environments. From a computer vision standpoint, bicycle detection is challenging as bicycle's appearance can change dramatically between viewpoints and a person riding on the bicycle is a non-rigid object. To this end, we present a tracking-by-detection method to detect and track bicycles that takes into account these difficult issues. First, a mixture model of multiple viewpoints is defined and trained via a Latent Support Vector Machine (LSVM) to detect bicycles under a variety of circumstances. Each model uses a part-based representation. This robust bicycle detector provides a series of measurements (i.e., bounding boxes) in the context of the Kalman filter. Second, to exploit the unique characteristics of bicycle tracking, two motion models based on bicycle's kinematics are fused using an IMM algorithm. For each motion model, an extended Kalman filter (EKF) is used to estimate the position and velocity of a bicycle in the vehicle coordinates. Finally, a single bicycle tracking method using an IMM algorithm is extended to that of multiple bicycle tracking by incorporating a Rao-Blackwellized Particle Filter which runs a particle filter for a data association and an IMM filter for each bicycle tracking. We demonstrate the effectiveness of this approach through a series of experiments run on a new bicycle dataset captured from a vehicle-mounted camera.", "title": "" }, { "docid": "4e7106a78dcf6995090669b9a25c9551", "text": "In this paper partial discharges (PD) in disc-shaped cavities in polycarbonate are measured at variable frequency (0.01-100 Hz) of the applied voltage. The advantage of PD measurements at variable frequency is that more information about the insulation system may be extracted than from traditional PD measurements at a single frequency (usually 50/60 Hz). The PD activity in the cavity is seen to depend on the applied frequency. Moreover, the PD frequency dependence changes with the applied voltage amplitude, the cavity diameter, and the cavity location (insulated or electrode bounded). It is suggested that the PD frequency dependence is governed by the statistical time lag of PD and the surface charge decay in the cavity. This is the first of two papers addressing the frequency dependence of PD in a cavity. In the second paper a physical model of PD in a cavity at variable applied frequency is presented.", "title": "" }, { "docid": "96b4e076448b9db96eae08620fdac98c", "text": "Incident Response has always been an important aspect of Information Security but it is often overlooked by security administrators. Responding to an incident is not solely a technical issue but has many management, legal, technical and social aspects that are presented in this paper. We propose a detailed management framework along with a complete structured methodology that contains best practices and recommendations for appropriately handling a security incident. 
We also present the state-of-the art technology in computer, network and software forensics as well as automated trace-back artifacts, schemas and protocols. Finally, we propose a generic Incident Response process within a corporate environment. © 2005 Elsevier Science. All rights reserved", "title": "" }, { "docid": "08bf0d5065ce44e4b15cd2a982f440d2", "text": "In this paper we present a hybrid approach for automatic composition of web services that generates semantic input-output based compositions with optimal end-to-end QoS, minimizing the number of services of the resulting composition. The proposed approach has four main steps: 1) generation of the composition graph for a request; 2) computation of the optimal composition that minimizes a single objective QoS function; 3) multi-step optimizations to reduce the search space by identifying equivalent and dominated services; and 4) hybrid local-global search to extract the optimal QoS with the minimum number of services. An extensive validation with the datasets of the Web Service Challenge 2009-2010 and randomly generated datasets shows that: 1) the combination of local and global optimization is a general and powerful technique to extract optimal compositions in diverse scenarios; and 2) the hybrid strategy performs better than the state-of-the-art, obtaining solutions with less services and optimal QoS.", "title": "" }, { "docid": "d6a6cadd782762e4591447b7dd2c870a", "text": "OBJECTIVE\nThe objective of this study was to assess the effects of participation in a mindfulness meditation-based stress reduction program on mood disturbance and symptoms of stress in cancer outpatients.\n\n\nMETHODS\nA randomized, wait-list controlled design was used. A convenience sample of eligible cancer patients enrolled after giving informed consent and were randomly assigned to either an immediate treatment condition or a wait-list control condition. Patients completed the Profile of Mood States and the Symptoms of Stress Inventory both before and after the intervention. The intervention consisted of a weekly meditation group lasting 1.5 hours for 7 weeks plus home meditation practice.\n\n\nRESULTS\nNinety patients (mean age, 51 years) completed the study. The group was heterogeneous in type and stage of cancer. Patients' mean preintervention scores on dependent measures were equivalent between groups. After the intervention, patients in the treatment group had significantly lower scores on Total Mood Disturbance and subscales of Depression, Anxiety, Anger, and Confusion and more Vigor than control subjects. The treatment group also had fewer overall Symptoms of Stress; fewer Cardiopulmonary and Gastrointestinal symptoms; less Emotional Irritability, Depression, and Cognitive Disorganization; and fewer Habitual Patterns of stress. Overall reduction in Total Mood Disturbance was 65%, with a 31% reduction in Symptoms of Stress.\n\n\nCONCLUSIONS\nThis program was effective in decreasing mood disturbance and stress symptoms in both male and female patients with a wide variety of cancer diagnoses, stages of illness, and ages. cancer, stress, mood, intervention, mindfulness.", "title": "" }, { "docid": "457f2508c59daaae9af818f8a6a963d1", "text": "Robotic systems hold great promise to assist with household, educational, and research tasks, but the difficulties of designing and building such robots often are an inhibitive barrier preventing their development. 
This paper presents a framework in which simple robots can be easily designed and then rapidly fabricated and tested, paving the way for greater proliferation of robot designs. The Python package presented in this work allows for the scripted generation of mechanical elements, using the principles of hierarchical structure and modular reuse to simplify the design process. These structures are then manufactured using an origami-inspired method in which precision cut sheets of plastic film are folded to achieve desired geometries. Using these processes, lightweight, low cost, rapidly built quadrotors were designed and fabricated. Flight tests compared the resulting robots against similar micro air vehicles (MAVs) generated using other processes. Despite lower tolerance and precision, robots generated using the process presented in this work took significantly less time and cost to design and build, and yielded lighter, lower power MAVs.", "title": "" }, { "docid": "39fc7b710a6d8b0fdbc568b48221de5d", "text": "The framework of cognitive wireless networks is expected to endow the wireless devices with the cognition-intelligence ability with which they can efficiently learn and respond to the dynamic wireless environment. In many practical scenarios, the complexity of network dynamics makes it difficult to determine the network evolution model in advance. Thus, the wireless decision-making entities may face a black-box network control problem and the model-based network management mechanisms will be no longer applicable. In contrast, model-free learning enables the decision-making entities to adapt their behaviors based on the reinforcement from their interaction with the environment and (implicitly) build their understanding of the system from scratch through trial-and-error. Such characteristics are highly in accordance with the requirement of cognition-based intelligence for devices in cognitive wireless networks. Therefore, model-free learning has been considered as one key implementation approach to adaptive, self-organized network control in cognitive wireless networks. In this paper, we provide a comprehensive survey on the applications of the state-of-the-art model-free learning mechanisms in cognitive wireless networks. According to the system models on which those applications are based, a systematic overview of the learning algorithms in the domains of single-agent system, multiagent systems, and multiplayer games is provided. The applications of model-free learning to various problems in cognitive wireless networks are discussed with the focus on how the learning mechanisms help to provide the solutions to these problems and improve the network performance over the model-based, non-adaptive methods. Finally, a broad spectrum of challenges and open issues is discussed to offer a guideline for the future research directions.", "title": "" }, { "docid": "9a5f5e43ac46255445268d4298af0a4c", "text": "Object removal is a topic highly involved in a wide range of image reconstruction applications such as restoration of corrupted or defected images, scene reconstruction, and film post-production. In recent years, there have been many efforts in the industry and academia to develop better algorithms for this subject. This paper discusses some of the recent work and various techniques currently adopted in this field and presents our algorithmic design that enhance the existing pixel-filling framework, and our post-inpaint refinement steps. 
This paper will further layout the implementation details and experimental results of our algorithm, on a mixture of images from both the standard image processing study papers and our own photo library. Results from our proposed methods will be evaluated and compared to the previous works in academia and other state-of-the-art approaches, with elaboration on the advantages and disadvantages. This paper will conclude with discussing some of the challenges encountered during the design and experiment phases and proposing potential steps to take in the future.", "title": "" }, { "docid": "76afcc3dfbb06f2796b61c8b5b424ad8", "text": "Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components. To capture complex morpho-syntactic features that can usually serve as indicators for irony or sarcasm across dynamic contexts, we propose a model that uses character-level vector representations of words, based on ELMo. We test our model on 7 different datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them, and otherwise offering competitive results.", "title": "" }, { "docid": "e75ec4137b0c559a1c375d97993448b0", "text": "In recent years, consumer-class UAVs have come into public view and cyber security starts to attract the attention of researchers and hackers. The tasks of positioning, navigation and return-to-home (RTH) of UAV heavily depend on GPS. However, the signal structure of civil GPS used by UAVs is completely open and unencrypted, and the signal received by ground devices is very weak. As a result, GPS signals are vulnerable to jamming and spoofing. The development of software define radio (SDR) has made GPS-spoofing easy and costless. GPS-spoofing may cause UAVs to be out of control or even hijacked. In this paper, we propose a novel method to detect GPS-spoofing based on monocular camera and IMU sensor of UAV. Our method was demonstrated on the UAV of DJI Phantom 4.", "title": "" }, { "docid": "a3ace9ac6ae3f3d2dd7e02bd158a5981", "text": "The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms. Thesis Supervisor: David R. Karger Title: Associate Professor", "title": "" }, { "docid": "db383295c34b919b2e2e859cfdf82fc2", "text": "Wafer level packages (WLPs) with various design configurations are rapidly gaining tremendous applications throughout semiconductor industry due to small-form factor, low-cost, and high performance. 
Because of the innovative production processes utilized in WLP manufacturing and the accompanying rise in the price of gold, the traditional wire bonding packages are no longer as attractive as they used to be. In addition, WLPs provide the smallest form factor to satisfy multifunctional device requirements along with improved signal integrity for today’s handheld electronics. Existing wire bonding devices can be easily converted to WLPs by adding a redistribution layer (RDL) during backend wafer level processing. Since the input/output (I/O) pads do not have to be routed to the perimeter of the die, the WLP die can be designed to have a much smaller footprint as compared to its wire bonding counterpart, which means more area-array dies can be packed onto a single wafer to reduce overall processing costs per die. Conventional (fan-in) WLPs are formed on the dies while they are still on the uncut wafer. The result is that the final packaged product is the same size as the die itself. Recently, fan-out WLPs have emerged. Fan-out WLP starts with the reconstitution or reconfiguration of individual dies to an artificial molded wafer. Fan-out WLPs eliminate the need of expensive substrate as in flip-chip packages, while expanding the WLP size with molding compound for higher I/O applications without compromising on the board level reliability. Essentially, WLP enables the next generation of portable electronics at a competitive price. Many future products using through-silicon-via (TSV) technology will be packaged as WLPs. There have been relatively few publications focused on the latest results of WLP development and research. Many design guidelines, such as material selection and geometry dimensions of under bump metallurgy (UBM), RDL, passivation and solder alloy, for optimum board level reliability performance of WLPs, are still based on technical know-how gained from flip-chip or wire bonding BGA reliability studies published in the past two decades. However, WLPs have their unique product requirements for design guidelines, process conditions, material selection, reliability tests, and failure analysis. In addition, WLP is also an enabling technology for 3D package and system-in-package (SIP), justifying significant research attention. The timing is therefore ripe for this edition to summarize the state-of-the-art research advances in wafer level packaging in various fields of interest. Integration of WLP in 3D packages with TSV or wireless proximity communication (PxC), as well as applications in Microelectromechanical Systems (MEMS) packaging and power packaging, will be highlighted in this issue. In addition, the stateof-the-art simulation is applied to design for enhanced package and board level reliability of WLPs, including thermal cycling test,", "title": "" }, { "docid": "9fe531efea8a42f4fff1fe0465493223", "text": "Time series classification has been around for decades in the data-mining and machine learning communities. In this paper, we investigate the use of convolutional neural networks (CNN) for time series classification. Such networks have been widely used in many domains like computer vision and speech recognition, but only a little for time series classification. We design a convolutional neural network that consists of two convolutional layers. One drawback with CNN is that they need a lot of training data to be efficient. 
We propose two ways to circumvent this problem: designing data-augmentation techniques and learning the network in a semi-supervised way using training time series from different datasets. These techniques are experimentally evaluated on a benchmark of time series datasets.", "title": "" }, { "docid": "e07198de4fe8ea55f2c04ba5b6e9423a", "text": "Query expansion (QE) is a well known technique to improve retrieval effectiveness, which expands original queries with extra terms that are predicted to be relevant. A recent trend in the literature is Supervised Query Expansion (SQE), where supervised learning is introduced to better select expansion terms. However, an important but neglected issue for SQE is its efficiency, as applying SQE in retrieval can be much more time-consuming than applying Unsupervised Query Expansion (UQE) algorithms. In this paper, we point out that the cost of SQE mainly comes from term feature extraction, and propose a Two-stage Feature Selection framework (TFS) to address this problem. The first stage is adaptive expansion decision, which determines if a query is suitable for SQE or not. For unsuitable queries, SQE is skipped and no term features are extracted at all, which reduces the most time cost. For those suitable queries, the second stage is cost constrained feature selection, which chooses a subset of effective yet inexpensive features for supervised learning. Extensive experiments on four corpora (including three academic and one industry corpus) show that our TFS framework can substantially reduce the time cost for SQE, while maintaining its effectiveness.", "title": "" }, { "docid": "dbcef163643232313207cd45402158de", "text": "Every industry has significant data output as a product of their working process, and with the recent advent of big data mining and integrated data warehousing it is the case for a robust methodology for assessing the quality for sustainable and consistent processing. In this paper a review is conducted on Data Quality (DQ) in multiple domains in order to propose connections between their methodologies. This critical review suggests that within the process of DQ assessment of heterogeneous data sets, not often are they treated as separate types of data in need of an alternate data quality assessment framework. We discuss the need for such a directed DQ framework and the opportunities that are foreseen in this research area and propose to address it through degrees of heterogeneity.", "title": "" }, { "docid": "fe687739626916780ff22d95cf89f758", "text": "In this paper, we address the problem of jointly summarizing large sets of Flickr images and YouTube videos. Starting from the intuition that the characteristics of the two media types are different yet complementary, we develop a fast and easily-parallelizable approach for creating not only high-quality video summaries but also novel structural summaries of online images as storyline graphs. The storyline graphs can illustrate various events or activities associated with the topic in a form of a branching network. The video summarization is achieved by diversity ranking on the similarity graphs between images and video frames. The reconstruction of storyline graphs is formulated as the inference of sparse time-varying directed graphs from a set of photo streams with assistance of videos. For evaluation, we collect the datasets of 20 outdoor activities, consisting of 2.7M Flickr images and 16K YouTube videos. 
Due to the large-scale nature of our problem, we evaluate our algorithm via crowdsourcing using Amazon Mechanical Turk. In our experiments, we demonstrate that the proposed joint summarization approach outperforms other baselines and our own methods using videos or images only.", "title": "" }, { "docid": "6ae289d7da3e923c1288f39fd7a162f6", "text": "The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.", "title": "" } ]
scidocsrr
e4e5cd44c838d50c69a0af61f354c541
Detection of Browser Fingerprinting by Static JavaScript Code Classification
[ { "docid": "6f045c9f48ce87f6b425ac6c5f5d5e9d", "text": "In the modern web, the browser has emerged as the vehicle of choice, which users are to trust, customize, and use, to access a wealth of information and online services. However, recent studies show that the browser can also be used to invisibly fingerprint the user: a practice that may have serious privacy and security implications.\n In this paper, we report on the design, implementation and deployment of FPDetective, a framework for the detection and analysis of web-based fingerprinters. Instead of relying on information about known fingerprinters or third-party-tracking blacklists, FPDetective focuses on the detection of the fingerprinting itself. By applying our framework with a focus on font detection practices, we were able to conduct a large scale analysis of the million most popular websites of the Internet, and discovered that the adoption of fingerprinting is much higher than previous studies had estimated. Moreover, we analyze two countermeasures that have been proposed to defend against fingerprinting and find weaknesses in them that might be exploited to bypass their protection. Finally, based on our findings, we discuss the current understanding of fingerprinting and how it is related to Personally Identifiable Information, showing that there needs to be a change in the way users, companies and legislators engage with fingerprinting.", "title": "" } ]
[ { "docid": "96b47f766be916548226abac36b8f318", "text": "Deep learning approaches have achieved state-of-the-art performance in cardiac magnetic resonance (CMR) image segmentation. However, most approaches have focused on learning image intensity features for segmentation, whereas the incorporation of anatomical shape priors has received less attention. In this paper, we combine a multi-task deep learning approach with atlas propagation to develop a shape-constrained bi-ventricular segmentation pipeline for short-axis CMR volumetric images. The pipeline first employs a fully convolutional network (FCN) that learns segmentation and landmark localisation tasks simultaneously. The architecture of the proposed FCN uses a 2.5D representation, thus combining the computational advantage of 2D FCNs networks and the capability of addressing 3D spatial consistency without compromising segmentation accuracy. Moreover, the refinement step is designed to explicitly enforce a shape constraint and improve segmentation quality. This step is effective for overcoming image artefacts (e.g. due to different breath-hold positions and large slice thickness), which preclude the creation of anatomically meaningful 3D cardiac shapes. The proposed pipeline is fully automated, due to network’s ability to infer landmarks, which are then used downstream in the pipeline to initialise atlas propagation. We validate the pipeline on 1831 healthy subjects and 649 subjects with pulmonary hypertension. Extensive numerical experiments on the two datasets demonstrate that our proposed method is robust and capable of producing accurate, high-resolution and anatomically smooth bi-ventricular 3D models, despite the artefacts in input CMR volumes.", "title": "" }, { "docid": "7dde24346f2df846b9dbbe45cd9a99d6", "text": "The Pemberton Happiness Index (PHI) is a recently developed integrative measure of well-being that includes components of hedonic, eudaimonic, social, and experienced well-being. The PHI has been validated in several languages, but not in Portuguese. Our aim was to cross-culturally adapt the Universal Portuguese version of the PHI and to assess its psychometric properties in a sample of the Brazilian population using online surveys.An expert committee evaluated 2 versions of the PHI previously translated into Portuguese by the original authors using a standardized form for assessment of semantic/idiomatic, cultural, and conceptual equivalence. A pretesting was conducted employing cognitive debriefing methods. In sequence, the expert committee evaluated all the documents and reached a final Universal Portuguese PHI version. For the evaluation of the psychometric properties, the data were collected using online surveys in a cross-sectional study. The study population included healthcare professionals and users of the social network site Facebook from several Brazilian geographic areas. In addition to the PHI, participants completed the Satisfaction with Life Scale (SWLS), Diener and Emmons' Positive and Negative Experience Scale (PNES), Psychological Well-being Scale (PWS), and the Subjective Happiness Scale (SHS). Internal consistency, convergent validity, known-group validity, and test-retest reliability were evaluated. Satisfaction with the previous day was correlated with the 10 items assessing experienced well-being using the Cramer V test. 
Additionally, a cut-off value of PHI to identify a \"happy individual\" was defined using receiver-operating characteristic (ROC) curve methodology. Data from 1035 Brazilian participants were analyzed (health professionals = 180; Facebook users = 855). Regarding reliability results, the internal consistency (Cronbach alpha = 0.890 and 0.914) and test-retest (intraclass correlation coefficient = 0.814) were both considered adequate. Most of the validity hypotheses formulated a priori (convergent and known-group) were further confirmed. The cut-off value of higher than 7 in remembered PHI was identified (AUC = 0.780, sensitivity = 69.2%, specificity = 78.2%) as the best one to identify a happy individual. We concluded that the Universal Portuguese version of the PHI is valid and reliable for use in the Brazilian population using online surveys.", "title": "" }, { "docid": "d763cefd5d584405e1a6c8e32c371c0c", "text": "Abstract: Whole world and administrators of Educational institutions in our country are concerned about regularity of student attendance. Student's overall academic performance is affected by the student's presence in his institute. Mainly there are two conventional methods for attendance taking and they are by calling student names or by taking student sign on paper. They both were time-consuming and inefficient. Hence, there is a requirement of a computer-based student attendance management system which will assist the faculty for maintaining attendance of presence. The paper reviews various computerized attendance management systems. In this paper basic problem of student attendance management is defined which is traditionally taken manually by faculty. One alternative to make student attendance system automatic is provided by Computer Vision. In this paper we review the various computerized systems which are being developed using different techniques. Based on this review a new approach for student attendance recording and management is proposed to be used for various colleges or academic institutes.", "title": "" }, { "docid": "7afe5c6affbaf30b4af03f87a018a5b3", "text": "Sentiment analysis deals with identifying polarity orientation embedded in users' comments and reviews. It aims at discriminating positive reviews from negative ones. Sentiment is related to culture and language morphology. In this paper, we investigate the effects of language morphology on sentiment analysis in reviews written in the Arabic language. In particular, we investigate, in detail, how negation affects sentiments. We also define a set of rules that capture the morphology of negations in Arabic. These rules are then used to detect sentiment taking care of negated words. Experimentations prove that our suggested approach is superior to several existing methods that deal with sentiment detection in Arabic reviews.", "title": "" }, { "docid": "cbc0e3dff1d86d88c416b1119fd3da82", "text": "One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a-priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and showcase how all the distinct components can be integrated to enable smooth robot operation.
We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.", "title": "" }, { "docid": "2bb31e4565edc858453af69296a67ee6", "text": "OBJECTIVES\nNetworks of franchised health establishments, providing a standardized set of services, are being implemented in developing countries. This article examines associations between franchise membership and family planning and reproductive health outcomes for both the member provider and the client.\n\n\nMETHODS\nRegression models are fitted examining associations between franchise membership and family planning and reproductive health outcomes at the service provider and client levels in three settings.\n\n\nRESULTS\nFranchising has a positive association with both general and family planning client volumes, and the number of family planning brands available. Similar associations with franchise membership are not found for reproductive health service outcomes. In some settings, client satisfaction is higher at franchised than other types of health establishments, although the association between franchise membership and client outcomes varies across the settings.\n\n\nCONCLUSIONS\nFranchise membership has apparent benefits for both the provider and the client, providing an opportunity to expand access to reproductive health services, although greater attention is needed to shift the focus from family planning to a broader reproductive health context.", "title": "" }, { "docid": "d9f7d78b6e1802a17225db13edd033f6", "text": "The edit distance between two character strings can be defined as the minimum cost of a sequence of editing operations which transforms one string into the other. The operations we admit are deleting, inserting and replacing one symbol at a time, with possibly different costs for each of these operations. The problem of finding the longest common subsequence of two strings is a special case of the problem of computing edit distances. We describe an algorithm for computing the edit distance between two strings of length n and m, n > m, which requires O(n * max( 1, m/log n)) steps whenever the costs of edit operations are integral multiples of a single positive real number and the alphabet for the strings is finite. These conditions are necessary for the algorithm to achieve the time bound.", "title": "" }, { "docid": "0360bfbb47af9e661114ea8d367a166f", "text": "Critical Discourse Analysis (CDA) is discourse analytical research that primarily studies the way social-power abuse and inequality are enacted, reproduced, legitimated, and resisted by text and talk in the social and political context. With such dissident research, critical discourse analysts take an explicit position and thus want to understand, expose, and ultimately challenge social inequality. This is also why CDA may be characterized as a social movement of politically committed discourse analysts. One widespread misunderstanding of CDA is that it is a special method of doing discourse analysis. There is no such method: in CDA all methods of the cross-discipline of discourse studies, as well as other relevant methods in the humanities and social sciences, may be used (Wodak and Meyer 2008; Titscher et al. 2000). 
To avoid this misunderstanding and to emphasize that many methods and approaches may be used in the critical study of text and talk, we now prefer the more general term critical discourse studies (CDS) for the field of research (van Dijk 2008b). However, since most studies continue to use the well-known abbreviation CDA, this chapter will also continue to use it. As an analytical practice, CDA is not one direction of research among many others in the study of discourse. Rather, it is a critical perspective that may be found in all areas of discourse studies, such as discourse grammar, Conversation Analysis, discourse pragmatics, rhetoric, stylistics, narrative analysis, argumentation analysis, multimodal discourse analysis and social semiotics, sociolinguistics, and ethnography of communication or the psychology of discourse-processing, among others. In other words, CDA is discourse study with an attitude. Some of the tenets of CDA could already be found in the critical theory of the Frankfurt School before World War II (Agger 1992b; Drake 2009; Rasmussen and Swindal 2004). Its current focus on language and discourse was initiated with the", "title": "" }, { "docid": "9a43476b4038e554c28e09bae9140e24", "text": "The success of text-based retrieval motivates us to investigate analogous techniques which can support the querying and browsing of image data. However, images differ significantly from text both syntactically and semantically in their mode of representing and expressing information. Thus, the generalization of information retrieval from the text domain to the image domain is non-trivial. This paper presents a framework for information retrieval in the image domain which supports content-based querying and browsing of images. A critical first step to establishing such a framework is to construct a codebook of \"keywords\" for images which is analogous to the dictionary for text documents. We refer to such \"keywords\" in the image domain as \"keyblocks.\" In this paper, we first present various approaches to generating a codebook containing keyblocks at different resolutions. Then we present a keyblock-based approach to content-based image retrieval. In this approach, each image is encoded as a set of one-dimensional index codes linked to the keyblocks in the codebook, analogous to considering a text document as a linear list of keywords. Generalizing upon text-based information retrieval methods, we then offer various techniques for image-based information retrieval. By comparing the performance of this approach with conventional techniques using color and texture features, we demonstrate the effectiveness of the keyblock-based approach to content-based image retrieval.", "title": "" }, { "docid": "1461157186183f11d7270d89eecd926a", "text": "This review analyzes trends and commonalities among prominent theories of media effects. On the basis of exemplary meta-analyses of media effects and bibliometric studies of well-cited theories, we identify and discuss five features of media effects theories as well as their empirical support. Each of these features specifies the conditions under which media may produce effects on certain types of individuals. Our review ends with a discussion of media effects in newer media environments. This includes theories of computer-mediated communication, the development of which appears to share a similar pattern of reformulation from unidirectional, receiver-oriented views, to theories that recognize the transactional nature of communication. 
We conclude by outlining challenges and promising avenues for future research.", "title": "" }, { "docid": "2917b7b1453f9e6386d8f47129b605fb", "text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).", "title": "" }, { "docid": "ca67fcc762caa19ce3911c266c458098", "text": "A novel microstrip lowpass filter is proposed to achieve an ultra wide stopband with 12th harmonic suppression and extremely sharp skirt characteristics. The transition band is from 1.26 to 1.37 GHz with -3 and -20 dB, respectively. The operating mechanism of the filter is investigated based on proposed equivalent-circuit model, and the role of each section in creating null points is theoretically discussed. An overall good agreement between measured and simulated results is observed.", "title": "" }, { "docid": "70a07b1aedcb26f7f03ffc636b1d84a8", "text": "This paper addresses the problem of scheduling concurrent jobs on clusters where application data is stored on the computing nodes. This setting, in which scheduling computations close to their data is crucial for performance, is increasingly common and arises in systems such as MapReduce, Hadoop, and Dryad as well as many grid-computing environments. We argue that data-intensive computation benefits from a fine-grain resource sharing model that differs from the coarser semi-static resource allocations implemented by most existing cluster computing architectures. The problem of scheduling with locality and fairness constraints has not previously been extensively studied under this resource-sharing model.\n We introduce a powerful and flexible new framework for scheduling concurrent distributed jobs with fine-grain resource sharing. The scheduling problem is mapped to a graph datastructure, where edge weights and capacities encode the competing demands of data locality, fairness, and starvation-freedom, and a standard solver computes the optimal online schedule according to a global cost model. We evaluate our implementation of this framework, which we call Quincy, on a cluster of a few hundred computers using a varied workload of data-and CPU-intensive jobs. We evaluate Quincy against an existing queue-based algorithm and implement several policies for each scheduler, with and without fairness constraints. Quincy gets better fairness when fairness is requested, while substantially improving data locality. The volume of data transferred across the cluster is reduced by up to a factor of 3.9 in our experiments, leading to a throughput increase of up to 40%.", "title": "" }, { "docid": "1968573cf98307276bf0f10037aa3623", "text": "In many imaging applications, the continuous phase information of the measured signal is wrapped to a single period of 2π, resulting in phase ambiguity. 
In this paper we consider the two-dimensional phase unwrapping problem and propose a Maximum a Posteriori (MAP) framework for estimating the true phase values based on the wrapped phase data. In particular, assuming a joint Gaussian prior on the original phase image, we show that the MAP formulation leads to a binary quadratic minimization problem. The latter can be efficiently solved by semidefinite relaxation (SDR). We compare the performances of our proposed method with the existing L1/L2-norm minimization approaches. The numerical results demonstrate that the SDR approach significantly outperforms the existing phase unwrapping methods.", "title": "" }, { "docid": "e3270182796d7244ef19865ebff581ed", "text": "Hyperscale datacenter providers have struggled to balance the growing need for specialized hardware (efficiency) with the economic benefits of homogeneity (manageability). In this paper we propose a new cloud architecture that uses reconfigurable logic to accelerate both network plane functions and applications. This Configurable Cloud architecture places a layer of reconfigurable logic (FPGAs) between the network switches and the servers, enabling network flows to be programmably transformed at line rate, enabling acceleration of local applications running on the server, and enabling the FPGAs to communicate directly, at datacenter scale, to harvest remote FPGAs unused by their local servers. We deployed this design over a production server bed, and show how it can be used for both service acceleration (Web search ranking) and network acceleration (encryption of data in transit at high-speeds). This architecture is much more scalable than prior work which used secondary rack-scale networks for inter-FPGA communication. By coupling to the network plane, direct FPGA-to-FPGA messages can be achieved at comparable latency to previous work, without the secondary network. Additionally, the scale of direct inter-FPGA messaging is much larger. The average round-trip latencies observed in our measurements among 24, 1000, and 250,000 machines are under 3, 9, and 20 microseconds, respectively. The Configurable Cloud architecture has been deployed at hyperscale in Microsoft's production datacenters worldwide.", "title": "" }, { "docid": "fe383fbca6d67d968807fb3b23489ad1", "text": "In this project, we attempt to apply machine-learning algorithms to predict Bitcoin price. For the first phase of our investigation, we aimed to understand and better identify daily trends in the Bitcoin market while gaining insight into optimal features surrounding Bitcoin price. Our data set consists of over 25 features relating to the Bitcoin price and payment network over the course of five years, recorded daily. Using this information we were able to predict the sign of the daily price change with an accuracy of 98.7%. For the second phase of our investigation, we focused on the Bitcoin price data alone and leveraged data at 10-minute and 10-second interval timepoints, as we saw an opportunity to evaluate price predictions at varying levels of granularity and noisiness. By predicting the sign of the future change in price, we are modeling the price prediction problem as a binomial classification task, experimenting with a custom algorithm that leverages both random forests and generalized linear models. 
These results had 50-55% accuracy in predicting the sign of future price change using 10-minute time intervals.", "title": "" }, { "docid": "59d57e31357eb72464607e89ba4ba265", "text": "Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds promise to be for scientists an alternative to clusters, grids, and supercomputers. However, virtualization may induce significant performance penalties for the demanding scientific computing workloads. In this work we present an evaluation of the usefulness of the current cloud computing services for scientific computing. We analyze the performance of the Amazon EC2 platform using micro-benchmarks, kernels, and e-Science workloads. We also compare using long-term traces the performance characteristics and cost models of clouds with those of other platforms accessible to scientists. While clouds are still changing, our results indicate that the current cloud services need an order of magnitude in performance improvement to be useful to the scientific community.", "title": "" }, { "docid": "9b69254f90c28e0256fdfbefc608c034", "text": "Multiple-station shared-use vehicle systems allow users to travel between different activity centers and are well suited for resort communities, recreational areas, as well as university and corporate campuses. In this type of shared-use vehicle system, trips are more likely to be one-way each time, differing from other shared-use vehicle system models such as neighborhood carsharing and station cars where round-trips are more prevalent. Although convenient to users, a multiple-station system can suffer from a vehicle distribution problem. As vehicles are used throughout the day, they may become disproportionally distributed among the stations. As a result, it is necessary on occasion to relocate vehicles from one station to another. Relocations can be performed by system staff, which can be cumbersome and costly. In order to alleviate the distribution problem and reduce the number of relocations, we introduce two user-based relocation mechanisms called trip joining (or ridesharing) and trip splitting. When the system realizes that it is becoming imbalanced, it urges users that have more than one passenger to take separate vehicles when more vehicles are needed at the destination station (trip splitting). Conversely, if two users are at the origin station at the same time traveling to the same destination, the system can urge them to rideshare (trip joining). We have implemented this concept both on a real-world university campus shared vehicle system and in a high-fidelity computer simulation model. The model results show that there can be as much as a 42% reduction in the number of relocations using these techniques.", "title": "" }, { "docid": "891efd54485c7cf73edd690e0d9b3cfa", "text": "Quantitative-diffusion-tensor MRI consists of deriving and displaying parameters that resemble histological or physiological stains, i.e., that characterize intrinsic features of tissue microstructure and microdynamics. Specifically, these parameters are objective, and insensitive to the choice of laboratory coordinate system.
Here, these two properties are used to derive intravoxel measures of diffusion isotropy and the degree of diffusion anisotropy, as well as intervoxel measures of structural similarity, and fiber-tract organization from the effective diffusion tensor, D, which is estimated in each voxel. First, D is decomposed into its isotropic and anisotropic parts, [D] I and D - [D] I, respectively (where [D] = Trace(D)/3 is the mean diffusivity, and I is the identity tensor). Then, the tensor (dot) product operator is used to generate a family of new rotationally and translationally invariant quantities. Finally, maps of these quantitative parameters are produced from high-resolution diffusion tensor images (in which D is estimated in each voxel from a series of 2D-FT spin-echo diffusion-weighted images) in living cat brain. Due to the high inherent sensitivity of these parameters to changes in tissue architecture (i.e., macromolecular, cellular, tissue, and organ structure) and in its physiologic state, their potential applications include monitoring structural changes in development, aging, and disease.", "title": "" }, { "docid": "916f6f0942a08501139f6d4d1750816d", "text": "The development of local anesthesia in dentistry has marked the beginning of a new era in terms of pain control. Lignocaine is the most commonly used local anesthetic (LA) agent even though it has a vasodilative effect and needs to be combined with adrenaline. Centbucridine is a non-ester, non amide group LA and has not been comprehensively studied in the dental setting and the objective was to compare it to Lignocaine. This was a randomized study comparing the onset time, duration, depth and cardiovascular parameters between Centbucridine (0.5%) and Lignocaine (2%). The study was conducted in the dental outpatient department at the Government Dental College in India on patients attending for the extraction of lower molars. A total of 198 patients were included and there were no significant differences between the LAs except those who received Centbucridine reported a significantly longer duration of anesthesia compared to those who received Lignocaine. None of the patients reported any side effects. Centbucridine was well tolerated and its substantial duration of anesthesia could be attributed to its chemical compound. Centbucridine can be used for dental procedures and can confidently be used in patients who cannot tolerate Lignocaine or where adrenaline is contraindicated.", "title": "" } ]
scidocsrr
cc2f7f19bfa1b6cc7a99cfcc8e50bbeb
Varying Linguistic Purposes of Emoji in (Twitter) Context
[ { "docid": "911ea52fa57524e002154e2fe276ac44", "text": "Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings. There currently exist several publicly-available, pre-trained sets of word embeddings, but they contain few or no emoji representations even as emoji usage in social media has increased. In this paper we release emoji2vec, pre-trained embeddings for all Unicode emoji which are learned from their description in the Unicode emoji standard.1 The resulting emoji embeddings can be readily used in downstream social natural language processing applications alongside word2vec. We demonstrate, for the downstream task of sentiment analysis, that emoji embeddings learned from short descriptions outperforms a skip-gram model trained on a large collection of tweets, while avoiding the need for contexts in which emoji need to appear frequently in order to estimate a representation.", "title": "" }, { "docid": "16a0750449d0c01080740588e73c2a5e", "text": "Emojis are a quickly spreading and rather unknown communication phenomenon which occasionally receives attention in the mainstream press, but lacks the scientific exploration it deserves. This paper is a first attempt at investigating the global distribution of emojis. We perform our analysis of the spatial distribution of emojis on a dataset of ∼17 million (and growing) geo-encoded tweets containing emojis by running a cluster analysis over countries represented as emoji distributions and performing correlation analysis of emoji distributions and World Development Indicators. We show that emoji usage tends to draw quite a realistic picture of the living conditions in various parts of our world.", "title": "" } ]
[ { "docid": "6adb3d2e49fa54679c4fb133a992b4f7", "text": "Kathleen McKeown1, Hal Daume III2, Snigdha Chaturvedi2, John Paparrizos1, Kapil Thadani1, Pablo Barrio1, Or Biran1, Suvarna Bothe1, Michael Collins1, Kenneth R. Fleischmann3, Luis Gravano1, Rahul Jha4, Ben King4, Kevin McInerney5, Taesun Moon6, Arvind Neelakantan8, Diarmuid O’Seaghdha7, Dragomir Radev4, Clay Templeton3, Simone Teufel7 1Columbia University, 2University of Maryland, 3University of Texas at Austin, 4University of Michigan, 5Rutgers University, 6IBM, 7Cambridge University, 8University of Massachusetts at Amherst", "title": "" }, { "docid": "607247339e5bb0299f06db3104deef77", "text": "This paper discusses the advantages of using the ACT-R cognitive architecture over the Prolog programming language for the research and development of a large-scale, functional, cognitively motivated model of natural language analysis. Although Prolog was developed for Natural Language Processing (NLP), it lacks any probabilistic mechanisms for dealing with ambiguity and relies on failure detection and algorithmic backtracking to explore alternative analyses. These mechanisms are problematic for handling ill-formed or unexpected inputs, often resulting in an exploration of the entire search space, which becomes intractable as the complexity and variability of the allowed inputs and corresponding grammar grow. By comparison, ACT-R provides context dependent and probabilistic mechanisms which allow the model to incrementally pursue the best analysis. When combined with a nonmonotonic context accommodation mechanism that supports modest adjustment of the evolving analysis to handle cases where the locally best analysis is not globally preferred, the result is an efficient pseudo-deterministic mechanism that obviates the need for failure detection and backtracking, aligns with our basic understanding of Human Language Processing (HLP) and is scalable to broad coverage. The successful transition of the natural language analysis model from Prolog to ACT-R suggests that a cognitively motivated approach to natural language analysis may also be suitable for achieving a functional capability.", "title": "" }, { "docid": "dce63433a9900b9b4e6d9d420713b38d", "text": "Pathogenic microorganisms must cope with extremely low free-iron concentrations in the host's tissues. Some fungal pathogens rely on secreted haemophores that belong to the Common in Fungal Extracellular Membrane (CFEM) protein family, to extract haem from haemoglobin and to transfer it to the cell's interior, where it can serve as a source of iron. Here we report the first three-dimensional structure of a CFEM protein, the haemophore Csa2 secreted by Candida albicans. The CFEM domain adopts a novel helical-basket fold that consists of six α-helices, and is uniquely stabilized by four disulfide bonds formed by its eight signature cysteines. The planar haem molecule is bound between a flat hydrophobic platform located on top of the helical basket and a peripheral N-terminal ‘handle’ extension. Exceptionally, an aspartic residue serves as the CFEM axial ligand, and so confers coordination of Fe3+ haem, but not of Fe2+ haem. Histidine substitution mutants of this conserved Asp acquired Fe2+ haem binding and retained the capacity to extract haem from haemoglobin. 
However, His-substituted CFEM proteins were not functional in vivo and showed disturbed haem exchange in vitro, which suggests a role for the oxidation-state-specific Asp coordination in haem acquisition by CFEM proteins.", "title": "" }, { "docid": "2cea3c0621b1ac332a6eb305661c077b", "text": "Testing of network protocols and distributed applications has become increasingly complex, as the diversity of networks and underlying technologies increase, and the adaptive behavior of applications becomes more sophisticated. In this paper, we present NIST Net, a tool to facilitate testing and experimentation with network code through emulation. NIST Net enables experimenters to model and effect arbitrary performance dynamics (packet delay, jitter, bandwidth limitations, congestion, packet loss and duplication) on live IP packets passing through a commodity Linux-based PC router. We describe the emulation capabilities of NIST Net; examine its architecture; and discuss some of the implementation challenges encountered in building such a tool to operate at very high network data rates while imposing minimal processing overhead. Calibration results are provided to quantify the fidelity and performance of NIST Net over a wide range of offered loads (up to 1 Gbps), and a diverse set of emulated performance dynamics.", "title": "" }, { "docid": "2897897e683e94b921799e72ebf99b4a", "text": "Recurrent neural networks have achieved excellent performance in many applications. However, on portable devices with limited resources, the models are often too large to deploy. For applications on the server with large scale concurrent requests, the latency during inference can also be very critical for costly computing resources. In this work, we address these problems by quantizing the network, both weights and activations, into multiple binary codes {−1,+1}. We formulate the quantization as an optimization problem. Under the key observation that once the quantization coefficients are fixed the binary codes can be derived efficiently by binary search tree, alternating minimization is then applied. We test the quantization for two well-known RNNs, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), on the language models. Compared with the full-precision counterpart, by 2-bit quantization we can achieve ∼16× memory saving and ∼6× real inference acceleration on CPUs, with only a reasonable loss in the accuracy. By 3-bit quantization, we can achieve almost no loss in the accuracy or even surpass the original model, with ∼10.5× memory saving and ∼3× real inference acceleration. Both results beat the existing quantization works with large margins. We extend our alternating quantization to image classification tasks. In both RNNs and feedforward neural networks, the method also achieves excellent performance.", "title": "" }, { "docid": "34e544af5158850b7119ac4f7c0b7b5e", "text": "Over the last decade, the surprising fact has emerged that machines can possess therapeutic power. Due to the many healing qualities of touch, one route to such power is through haptic emotional interaction, which requires sophisticated touch sensing and interpretation. We explore the development of touch recognition technologies in the context of a furry artificial lap-pet, with the ultimate goal of creating therapeutic interactions by sensing human emotion through touch. In this work, we build upon a previous design for a new type of fur-based touch sensor.
Here, we integrate our fur sensor with a piezoresistive fabric location/pressure sensor, and adapt the combined design to cover a curved creature-like object. We then use this interface to collect synchronized time-series data from the two sensors, and perform machine learning analysis to recognize 9 key affective touch gestures. In a study of 16 participants, our model averages 94% recognition accuracy when trained on individuals, and 86% when applied to the combined set of all participants. The model can also recognize which participant is touching the prototype with 79% accuracy. These results promise a new generation of emotionally intelligent machines, enabled by affective touch gesture recognition.", "title": "" }, { "docid": "760edd83045a80dbb2231c0ffbef2ea7", "text": "This paper proposes a method to modify a traditional convolutional neural network (CNN) into an interpretable CNN, in order to clarify knowledge representations in high conv-layers of the CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. Our interpretable CNNs use the same training data as ordinary CNNs without a need for any annotations of object parts or textures for supervision. The interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. We can apply our method to different types of CNNs with various structures. The explicit knowledge representation in an interpretable CNN can help people understand the logic inside a CNN, i.e. what patterns are memorized by the CNN for prediction. Experiments have shown that filters in an interpretable CNN are more semantically meaningful than those in a traditional CNN. The code is available at https://github.com/zqs1022/interpretableCNN.", "title": "" }, { "docid": "84037cd25cb12f6f823da8170a843f75", "text": "This paper presents a topology-based representation dedicated to complex indoor scenes. It accounts for memory management and performances during modelling, visualization and lighting simulation. We propose to enlarge a topological model (called generalized maps) with multipartition and hierarchy. Multipartition allows the user to group objects together according to semantics. Hierarchy provides a coarse-to-fine description of the environment. The topological model we propose has been used for devising a modeller prototype and generating efficient data structure in the context of visualization, global illumination and 1 GHz wave propagation simulation. We presently handle buildings composed of up to one billion triangles.", "title": "" }, { "docid": "211b2a146aba4161aac649551ad613f6", "text": "Rapid technological advances have led to the production of different types of biological data and enabled construction of complex networks with various types of interactions between diverse biological entities. Standard network data analysis methods were shown to be limited in dealing with such heterogeneous networked data and consequently, new methods for integrative data analyses have been proposed. The integrative methods can collectively mine multiple types of biological data and produce more holistic, systems-level biological insights. We survey recent methods for collective mining (integration) of various types of networked biological data. We compare different state-of-the-art methods for data integration and highlight their advantages and disadvantages in addressing important biological problems. 
We identify the important computational challenges of these methods and provide a general guideline for which methods are suited for specific biological problems, or specific data types. Moreover, we propose that recent non-negative matrix factorization-based approaches may become the integration methodology of choice, as they are well suited and accurate in dealing with heterogeneous data and have many opportunities for further development.", "title": "" }, { "docid": "a51b57427c5204cb38483baa9389091f", "text": "Cross-laminated timber (CLT), a new generation of engineered wood product developed initially in Europe, has been gaining popularity in residential and non-residential applications in several countries. Numerous impressive low- and mid-rise buildings built around the world using CLT showcase the many advantages that this product can offer to the construction sector. This article provides basic information on the various attributes of CLT as a product and as a structural system in general, and examples of buildings made of CLT panels. A road map for codes and standards implementation of CLT in North America is included, along with an indication of some of the obstacles that can be expected.", "title": "" }, { "docid": "c69ce70eebe0a3dd89a66b0a9d599019", "text": "In this paper, by utilizing the capabilities of modern ubiquitous operating systems, we introduce a comprehensive framework for a ubiquitous translation and language learning environment for English to Sanskrit Machine Translation. We present an application for learning Sanskrit characters, sentences and English-Sanskrit translation. For the implementation, we have used the open-source Android platform on the Samsung Mini2440, a state-of-the-art development board. We present our current state of implementation, the architecture of our framework, and the findings we have gathered so far. In addition to this, we describe the Phrase-Based Statistical Machine Translation Decoder for English to Sanskrit translation in a ubiquitous environment. Our goal is to improve the translation quality by enhancing the translation table and by preprocessing the Sanskrit language text.", "title": "" }, { "docid": "29f820ea99905ad1ee58eb9d534c89ab", "text": "Basic results in the rigorous theory of weighted dynamical zeta functions or dynamically defined generalized Fredholm determinants are presented. Analytic properties of the zeta functions or determinants are related to statistical properties of the dynamics via spectral properties of dynamical transfer operators, acting on Banach spaces of observables.", "title": "" }, { "docid": "3aa58539c69d6706bc0a9ca0256cdf80", "text": "BACKGROUND\nAcne vulgaris is a prevalent skin disorder impairing both physical and psychosocial health. This study was designed to investigate the effectiveness of photodynamic therapy (PDT) combined with minocycline in moderate to severe facial acne and its influence on quality of life (QOL).\n\n\nMETHODS\nNinety-five patients with moderate to severe facial acne (Investigator Global Assessment [IGA] score 3-4) were randomly treated with PDT and minocycline (n = 48) or minocycline alone (n = 47). All patients took minocycline hydrochloride 100 mg/d for 4 weeks, whereas patients in the minocycline plus PDT group also received PDT treatment 4 times, 1 week apart.
IGA score, lesion counts, Dermatology Life Quality Index (DLQI), and safety evaluation were performed before treatment and at 2, 4, 6, and 8 weeks after enrolment.\n\n\nRESULTS\nThere were no statistically significant differences in characteristics between the 2 treatment groups at baseline. Minocycline plus PDT treatment led to a greater mean percentage reduction from baseline in lesion counts versus minocycline alone at 8 weeks for both inflammatory (-74.4% vs -53.3%; P < .001) and noninflammatory lesions (-61.7% vs -42.4%; P < .001). More patients treated with minocycline plus PDT achieved IGA score <2 at study end (week 8: 30/48 vs 20/47; P < .05). Patients treated with minocycline plus PDT had significantly lower DLQI at 8 weeks (4.4 vs 6.3; P < .001). Adverse events were mild and manageable.\n\n\nCONCLUSIONS\nCompared with minocycline alone, the combination of PDT with minocycline significantly improved clinical efficacy and QOL in moderate to severe facial acne patients.", "title": "" }, { "docid": "0beec77d16aae48a2679be775f8116b1", "text": "The aim of the study was to compare fertility potential in patients who had been operated upon in childhood because of unilateral or bilateral cryptorchidism. The study covered 68 men (age 25–30 years) with a history of unilateral (49) or bilateral orchidopexy (Mandat et al. in Eur J Pediatr Surg 4:94–97, 1994). Fertility potential was estimated with semen analysis (sperm concentration, motility and morphology), testicular volume measurement and hormonal status evaluation [follicle-stimulating hormone (FSH) and inhibin B levels]. Differences were analysed with the nonparametric Mann–Whitney test. The group of subjects with bilateral orchidopexy had significantly decreased sperm concentration (P = 0.047), sperm motility (P = 0.003), inhibin B level (P = 0.036) and testicular volume (P = 0.040), compared to subjects with unilateral orchidopexy. In the group with bilateral orchidopexy, there was a strong negative correlation between inhibin B and FSH levels (P < 0.001, r_s = −0.772). Sperm concentration in this group correlated positively with inhibin B level (P = 0.004, r_s = 0.627) and negatively with FSH level (P = 0.04, r_s = −0.435). The group of subjects with unilateral orchidopexy who had been operated before the age of 8 years had significantly increased inhibin B level (P = 0.006) and testicular volume (P = 0.007) and decreased FSH level (P = 0.01), compared to subjects who had been operated at the age of 8 or later. Men who underwent bilateral orchidopexy in their childhood have an appreciably poorer prognosis for fertility compared to men who underwent a unilateral procedure. Our study also confirmed that men who underwent unilateral orchidopexy in their childhood before the age of 8 years have a better prognosis for fertility compared to those who were operated later.", "title": "" }, { "docid": "71ac019a7305529bd353ddca8b4573ef", "text": "In this paper we will discuss progress in the area of thread scheduling for multiprocessors, including systems which are Chip-MultiProcessors (CMP), can perform Simultaneous MultiThreading (SMT), and/or support multiple threads to execute in parallel. The reviewed papers approach thread scheduling from the aspects of resource utilization, thread priority, Operating System (OS) effects, and interrupts.
The metrics used by the discussed papers will be summarized.", "title": "" }, { "docid": "2cebd2fd12160d2a3a541989293f10be", "text": "A compact Vivaldi antenna array printed on thick substrate and fed by a Substrate Integrated Waveguides (SIW) structure has been developed. The antenna array utilizes a compact SIW binary divider to significantly minimize the feed structure insertion losses. The low-loss SIW binary divider has a common novel Grounded Coplanar Waveguide (GCPW) feed to provide a wideband transition to the SIW and to sustain a good input match while preventing higher order modes excitation. The antenna array was designed, fabricated, and thoroughly investigated. Detailed simulations of the antenna and its feed, in addition to its relevant measurements, will be presented in this paper.", "title": "" }, { "docid": "85693811a951a191d573adfe434e9b18", "text": "Diagnosing problems in data centers has always been a challenging problem due to their complexity and heterogeneity. Among recent proposals for addressing this challenge, one promising approach leverages provenance, which provides the fundamental functionality that is needed for performing fault diagnosis and debugging—a way to track direct and indirect causal relationships between system states and their changes. This information is valuable, since it permits system operators to tie observed symptoms of a faults to their potential root causes. However, capturing provenance in a data center is challenging because, at high data rates, it would impose a substantial cost. In this paper, we introduce techniques that can help with this: We show how to reduce the cost of maintaining provenance by leveraging structural similarities for compression, and by offloading expensive but highly parallel operations to hardware. We also discuss our progress towards transforming provenance into compact actionable diagnostic decisions to repair problems caused by misconfigurations and program bugs.", "title": "" }, { "docid": "b2a2fdf56a79c1cb82b8b3a55b9d841d", "text": "This paper describes the architecture and implementation of a shortest path processor, both in reconfigurable hardware and VLSI. This processor is based on the principles of recurrent spatiotemporal neural network. The processor’s operation is similar to Dijkstra’s algorithm and it can be used for network routing calculations. The objective of the processor is to find the least cost path in a weighted graph between a given node and one or more destinations. The digital implementation exhibits a regular interconnect structure and uses simple processing elements, which is well suited for VLSI implementation and reconfigurable hardware.", "title": "" }, { "docid": "a94f4add9893057509a8bafeb8ec698b", "text": "Advances in software defined radio (SDR) technology allow unprecedented control on the entire processing chain, allowing modification of each functional block as well as sampling the changes in the input waveform. This article describes a method for uniquely identifying a specific radio among nominally similar devices using a combination of SDR sensing capability and machine learning (ML) techniques. The key benefit of this approach is that ML operates on raw I/Q samples and distinguishes devices using only the transmitter hardware-induced signal modifications that serve as a unique signature for a particular device. 
No higher-level decoding, feature engineering, or protocol knowledge is needed, further mitigating challenges of ID spoofing and coexistence of multiple protocols in a shared spectrum. The contributions of the article are as follows: (i) The operational blocks in a typical wireless communications processing chain are modified in a simulation study to demonstrate RF impairments, which we exploit. (ii) Using an overthe- air dataset compiled from an experimental testbed of SDRs, an optimized deep convolutional neural network architecture is proposed, and results are quantitatively compared with alternate techniques such as support vector machines and logistic regression. (iii) Research challenges for increasing the robustness of the approach, as well as the parallel processing needs for efficient training, are described. Our work demonstrates up to 90-99 percent experimental accuracy at transmitter- receiver distances varying between 2-50 ft over a noisy, multi-path wireless channel.", "title": "" }, { "docid": "c5d9b3cf2332e06c883dc2f41e0f2ae8", "text": "We assess the reliability of isobaric-tags for relative and absolute quantitation (iTRAQ), based on different types of replicate analyses taking into account technical, experimental, and biological variations. In total, 10 iTRAQ experiments were analyzed across three domains of life involving Saccharomyces cerevisiae KAY446, Sulfolobus solfataricus P2, and Synechocystis sp. PCC 6803. The coverage of protein expression of iTRAQ analysis increases as the variation tolerance increases. In brief, a cutoff point at +/-50% variation (+/-0.50) would yield 88% coverage in quantification based on an analysis of biological replicates. Technical replicate analysis produces a higher coverage level of 95% at a lower cutoff point of +/-30% variation. Experimental or iTRAQ variations exhibit similar behavior as biological variations, which suggest that most of the measurable deviations come from biological variations. These findings underline the importance of replicate analysis as a validation tool and benchmarking technique in protein expression analysis.", "title": "" } ]
scidocsrr
8506cef3444a3ec0076b5956d62bfa3e
Evaluating Visual Aesthetics in Photographic Portraiture
[ { "docid": "c8977fe68b265b735ad4261f5fe1ec25", "text": "We present ACQUINE - Aesthetic Quality Inference Engine, a publicly accessible system which allows users to upload their photographs and have them rated automatically for aesthetic quality. The system integrates a support vector machine based classifier which extracts visual features on the fly and performs real-time classification and prediction. As the first publicly available tool for automatically determining the aesthetic value of an image, this work is a significant first step in recognizing human emotional reaction to visual stimulus. In this paper, we discuss fundamentals behind this system, and some of the challenges faced while creating it. We report statistics generated from over 140,000 images uploaded by Web users. The system is demonstrated at http://acquine.alipr.com.", "title": "" }, { "docid": "44ff9580f0ad6321827cf3f391a61151", "text": "This paper aims to evaluate the aesthetic visual quality of a special type of visual media: digital images of paintings. Assessing the aesthetic visual quality of paintings can be considered a highly subjective task. However, to some extent, certain paintings are believed, by consensus, to have higher aesthetic quality than others. In this paper, we treat this challenge as a machine learning problem, in order to evaluate the aesthetic quality of paintings based on their visual content. We design a group of methods to extract features to represent both the global characteristics and local characteristics of a painting. Inspiration for these features comes from our prior knowledge in art and a questionnaire survey we conducted to study factors that affect human's judgments. We collect painting images and ask human subjects to score them. These paintings are then used for both training and testing in our experiments. Experimental results show that the proposed work can classify high-quality and low-quality paintings with performance comparable to humans. This work provides a machine learning scheme for the research of exploring the relationship between aesthetic perceptions of human and the computational visual features extracted from paintings.", "title": "" } ]
[ { "docid": "2f7990443281ed98189abb65a23b0838", "text": "In recent years, there has been a tendency to correlate the origin of modern culture and language with that of anatomically modern humans. Here we discuss this correlation in the light of results provided by our first hand analysis of ancient and recently discovered relevant archaeological and paleontological material from Africa and Europe. We focus in particular on the evolutionary significance of lithic and bone technology, the emergence of symbolism, Neandertal behavioral patterns, the identification of early mortuary practices, the anatomical evidence for the acquisition of language, the", "title": "" }, { "docid": "d93795318775df2c451eaf8c04a764cf", "text": "The queries issued to search engines are often ambiguous or multifaceted, which requires search engines to return diverse results that can fulfill as many different information needs as possible; this is called search result diversification. Recently, the relational learning to rank model, which designs a learnable ranking function following the criterion of maximal marginal relevance, has shown effectiveness in search result diversification [Zhu et al. 2014]. The goodness of a diverse ranking model is usually evaluated with diversity evaluation measures such as α-NDCG [Clarke et al. 2008], ERR-IA [Chapelle et al. 2009], and D#-NDCG [Sakai and Song 2011]. Ideally the learning algorithm would train a ranking model that could directly optimize the diversity evaluation measures with respect to the training data. Existing relational learning to rank algorithms, however, only train the ranking models by optimizing loss functions that loosely relate to the evaluation measures. To deal with the problem, we propose a general framework for learning relational ranking models via directly optimizing any diversity evaluation measure. In learning, the loss function upper-bounding the basic loss function defined on a diverse ranking measure is minimized. We can derive new diverse ranking algorithms under the framework, and several diverse ranking algorithms are created based on different upper bounds over the basic loss function. We conducted comparisons between the proposed algorithms with conventional diverse ranking methods using the TREC benchmark datasets. Experimental results show that the algorithms derived under the diverse learning to rank framework always significantly outperform the state-of-the-art baselines.", "title": "" }, { "docid": "7d3642cc1714951ccd9ec1928a340d81", "text": "Electrical fuse (eFUSE) has become a popular choice to enable memory redundancy, chip identification and authentication, analog device trimming, and other applications. We will review the evolution and applications of electrical fuse solutions for 180 nm to 45 nm technologies at IBM, and provide some insight into future uses in 32 nm technology and beyond with the eFUSE as a building block for the autonomic chip of the future.", "title": "" }, { "docid": "43ec6774e1352443f41faf8d3780059b", "text": "Cloud computing is currently one of the most hyped information technology fields and it has become one of the fastest growing segments of IT. Cloud computing allows us to scale our servers in magnitude and availability in order to provide services to a greater number of end users. Moreover, adopters of the cloud service model are charged based on a pay-per-use basis of the cloud's server and network resources, aka utility computing. 
With this model, a conventional DDoS attack on server and network resources is transformed in a cloud environment to a new breed of attack that targets the cloud adopter's economic resource, namely Economic Denial of Sustainability attack (EDoS). In this paper, we advocate a novel solution, named EDoS-Shield, to mitigate the Economic Denial of Sustainability (EDoS) attack in the cloud computing systems. We design a discrete simulation experiment to evaluate its performance and the results show that it is a promising solution to mitigate the EDoS.", "title": "" }, { "docid": "f8082d18f73bee4938ab81633ff02391", "text": "Against the background of Moreno’s “cognitive-affective theory of learning with media” (CATLM) (Moreno, 2006), three papers on cognitive and affective processes in learning with multimedia are discussed in this commentary. The papers provide valuable insights in how cognitive processing and learning results can be affected by constructs such as “situational interest”, “positive emotions”, or “confusion”, and they suggest questions for further research in this field. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e94cc8dbf257878ea9b78eceb990cb3b", "text": "The past two decades have seen extensive growth of sexual selection research. Theoretical and empirical work has clarified many components of pre- and postcopulatory sexual selection, such as aggressive competition, mate choice, sperm utilization and sexual conflict. Genetic mechanisms of mate choice evolution have been less amenable to empirical testing, but molecular genetic analyses can now be used for incisive experimentation. Here, we highlight some of the currently debated areas in pre- and postcopulatory sexual selection. We identify where new techniques can help estimate the relative roles of the various selection mechanisms that might work together in the evolution of mating preferences and attractive traits, and in sperm-egg interactions.", "title": "" }, { "docid": "c6260516384c43d561610f52ff56aa25", "text": "Successful monetization of user-generated-content (UGC) business calls for attracting enough users, and the right users. The defining characteristic of UGC is users are also content contributors. In this study, we analyze the impact of a UGC firm’s quality control decision on user community composition. We model two UGC firms in competition, with one permitting only high quality content while the other not controlling quality. Users differ in their valuations and the content quality they contribute. Through analyzing various equilibrium situations, we find that higher reward value generally benefits the firm without quality control. However, when the intrinsic value of contribution is low, higher reward value may surprisingly drive high valuation users away from that firm. Also somewhat interestingly, we find that higher cost of contribution may benefit the firm that does not control quality. Our work is among the first to study the business impact of quality control of UGC.", "title": "" }, { "docid": "0bd0af757a365de97db204e8c5b377ca", "text": "Mobile communications are used by more than two thirds of the world population who expect security and privacy guarantees. The 3rd Generation Partnership Project (3GPP) responsible for the worldwide standardization of mobile communication has designed and mandated the use of the AKA protocol to protect the subscribers’ mobile services. 
Even though privacy was a requirement, numerous subscriber location attacks have been demonstrated against AKA, some of which have been fixed or mitigated in the enhanced AKA protocol designed for 5G. In this paper, we reveal a new privacy attack against all variants of the AKA protocol, including 5G AKA, that breaches subscriber privacy more severely than known location privacy attacks do. Our attack exploits a new logical vulnerability we uncovered that would require dedicated fixes. We demonstrate the practical feasibility of our attack using low cost and widely available setups. Finally we conduct a security analysis of the vulnerability and discuss countermeasures to remedy our attack.", "title": "" }, { "docid": "002bd283bd76ac47f39ea001877b4402", "text": "Low-Power Wide-Area Network (LPWAN) heralds a promising class of technology to overcome the range limits and scalability challenges in traditional wireless sensor networks. Recently proposed Sensor Network over White Spaces (SNOW) technology is particularly attractive due to the availability and advantages of TV spectrum in long-range communication. This paper proposes a new design of SNOW that is asynchronous, reliable, and robust. It represents the first highly scalable LPWAN over TV white spaces to support reliable, asynchronous, bi-directional, and concurrent communication between numerous sensors and a base station. This is achieved through a set of novel techniques. This new design of SNOW has an OFDM based physical layer that adopts robust modulation scheme and allows the base station using a single antenna-radio (1) to send different data to different nodes concurrently and (2) to receive concurrent transmissions made by the sensor nodes asynchronously. It has a lightweight MAC protocol that (1) efficiently implements per-transmission acknowledgments of the asynchronous transmissions by exploiting the adopted OFDM design; (2) combines CSMA/CA and location-aware spectrum allocation for mitigating hidden terminal effects, thus enhancing the flexibility of the nodes in transmitting asynchronously. Hardware experiments through deployments in three radio environments - in a large metropolitan city, in a rural area, and in an indoor environment - as well as large-scale simulations demonstrated that the new SNOW design drastically outperforms other LPWAN technologies in terms of scalability, energy, and latency.", "title": "" }, { "docid": "805ea1349c046008a5efd67382ff82aa", "text": "Agent architectures need to organize themselves and adapt dynamically to changing circumstances without top-down control from a system operator. Some researchers provide this capability with complex agents that emulate human intelligence and reason explicitly about their coordination, reintroducing many of the problems of complex system design and implementation that motivated increasing software localization in the first place. Naturally occurring systems of simple agents (such as populations of insects or other animals) suggest that this retreat is not necessary. 
This paper summarizes several studies of such systems, and derives from them a set of general principles that artificial multiagent systems can use to support overall system behavior significantly more complex than the behavior of the individuals agents.", "title": "" }, { "docid": "c5f1d5fc5c5161bc9795cdc0362b8ca7", "text": "Bayesian optimization has become a successful tool for optimizing the hyperparameters of machine learning algorithms, such as support vector machines or deep neural networks. Despite its success, for large datasets, training and validating a single configuration often takes hours, days, or even weeks, which limits the achievable performance. To accelerate hyperparameter optimization, we propose a generative model for the validation error as a function of training set size, which is learned during the optimization process and allows exploration of preliminary configurations on small subsets, by extrapolating to the full dataset. We construct a Bayesian optimization procedure, dubbed Fabolas, which models loss and training time as a function of dataset size and automatically trades off high information gain about the global optimum against computational cost. Experiments optimizing support vector machines and deep neural networks show that Fabolas often finds high-quality solutions 10 to 100 times faster than other state-of-the-art Bayesian optimization methods or the recently proposed bandit strategy Hyperband.", "title": "" }, { "docid": "8f9e3bb85b4a2fcff3374fd700ac3261", "text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.", "title": "" }, { "docid": "b858f8c81a282fbb1444ee813f47797a", "text": "In conventional neural networks (NN) based parametric text-tospeech (TTS) synthesis frameworks, text analysis and acoustic modeling are typically processed separately, leading to some limitations. On one hand, much significant human expertise is normally required in text analysis, which presents a laborious task for researchers; on the other hand, training of the NN-based acoustic models still relies on the hidden Markov model (HMM) to obtain frame-level alignments. This acquisition process normally goes through multiple complicated stages. The complex pipeline makes constructing a NN-based parametric TTS system a challenging task. This paper attempts to bypass these limitations using a novel end-to-end parametric TTS synthesis framework, i.e. the text analysis and acoustic modeling are integrated together employing an attention-based recurrent neural network. Thus the alignments can be learned automatically. Preliminary experimental results show that the proposed system can generate moderately smooth spectral parameters and synthesize fairly intelligible speech on short utterances (less than 8 Chinese characters).", "title": "" }, { "docid": "59f2822d69ffb59fafabefa16c57f6c3", "text": "Timely and accurate detection of anomalies in massive data streams have important applications in preventing machine failures, intrusion detection, and dynamic load balancing, etc. 
In this paper, we introduce a new anomaly detection algorithm, which can detect anomalies in a streaming fashion by making only one pass over the data while utilizing limited storage. The algorithm uses ideas from matrix sketching and randomized low-rank matrix approximations to maintain an approximate low-rank orthogonal basis of the data in a streaming model. Using this constructed orthogonal basis, anomalies in new incoming data are detected based on a simple reconstruction error test. We theoretically prove that our algorithm compares favorably with an offline approach based on global singular value decomposition updates. The experimental results show the effectiveness and efficiency of our approach over other popular fast anomaly detection methods.", "title": "" }, { "docid": "2d8baa9a78e5e20fd20ace55724e2aec", "text": "To determine the relationship between fatigue and post-activation potentiation, we examined the effects of sub-maximal continuous running on neuromuscular function tests, as well as on the squat jump and counter movement jump in endurance athletes. The height of the squat jump and counter movement jump and the estimate of the fast twitch fiber recruiting capabilities were assessed in seven male middle distance runners before and after 40 min of continuous running at an intensity corresponding to the individual lactate threshold. The same test was then repeated after three weeks of specific aerobic training. Since the three variables were strongly correlated, only the estimate of the fast twitch fiber was considered for the results. The subjects showed a significant improvement in the fast twitch fiber recruitment percentage after the 40 min run. Our data show that submaximal physical exercise determined a change in fast twitch muscle fiber recruitment patterns observed when subjects performed vertical jumps; however, this recruitment capacity was proportional to the subjects' individual fast twitch muscle fiber profiles measured before the 40 min run. The results of the jump tests did not change significantly after the three-week training period. These results suggest that pre-fatigue methods, through sub-maximal exercises, could be used to take advantage of explosive capacity in middle-distance runners.", "title": "" }, { "docid": "08084de7a702b87bd8ffc1d36dbf67ea", "text": "In recent years, the mobile data traffic is increasing and many more frequency bands have been employed in cellular handsets. A simple π type tunable band elimination filter (BEF) with switching function has been developed using a wideband tunable surface acoustic wave (SAW) resonator circuit. The frequency of BEF is tuned approximately 31% by variable capacitors without spurious. In LTE low band, the arrangement of TX and RX frequencies is to be reversed in Band 13, 14 and 20 compared with the other bands. The steep edge slopes of the developed filter can be exchanged according to the resonance condition and switching. With combining the TX and RX tunable BEFs and the small sized broadband circulator, a new tunable duplexer has been fabricated, and its TX-RX isolation is proved to be more than 50dB in LTE low band operations.", "title": "" }, { "docid": "adc9e237e2ca2467a85f54011b688378", "text": "Quadrotors are rapidly emerging as a popular platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their construction and maintenance, their ability to hover, and their vertical take off and landing (VTOL) capability. 
Current designs have often considered only nominal operating conditions for vehicle control design. This work seeks to address issues that arise when deviating significantly from the hover flight regime. Aided by well established research for helicopter flight control, four separate aerodynamic effects are investigated as they pertain to quadrotor flight. The effects result from either translational or vertical vehicular velocity components, and cause both moments that affect attitude control and thrust variation that affects altitude control. Where possible, a theoretical development is first presented, and is then validated through both thrust test stand measurements and vehicle flight tests using the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) quadrotor helicopter. The results have enabled improved controller tracking throughout the flight envelope, including at higher speeds and in gusting winds.", "title": "" }, { "docid": "a497cb84141c7db35cd9a835b11f33d2", "text": "The ubiquitous nature of online social media and the ever-expanding usage of short text messages make them a potential source of crowd wisdom extraction, especially in terms of sentiments; sentiment classification and analysis is therefore a significant task in current research. A major challenge in this area is to tame the data in terms of noise, relevance, emoticons, folksonomies and slangs. This work is an effort to see the effect of pre-processing on Twitter data for the fortification of sentiment classification, especially in terms of slang words. The proposed method of pre-processing relies on the bindings of slang words with other coexisting words to check the significance and sentiment translation of the slang word. We have used n-grams to find the bindings and conditional random fields to check the significance of slang words. Experiments were carried out to observe the effect of the proposed method on sentiment classification, which clearly indicates the improvements in accuracy of classification.", "title": "" }, { "docid": "8255146164ff42f8755d8e74fd24cfa1", "text": "We present a named-entity recognition (NER) system for parallel multilingual text. Our system handles three languages (i.e., English, French, and Spanish) and is tailored to the biomedical domain. For each language, we design a supervised knowledge-based CRF model with rich biomedical and general domain information. We use the sentence alignment of the parallel corpora, the word alignment generated by the GIZA++[8] tool, and Wikipedia-based word alignment in order to transfer system predictions made by individual language models to the remaining parallel languages. We re-train each individual language system using the transferred predictions and generate a final enriched NER model for each language. The enriched system performs better than the initial system based on the predictions transferred from the other language systems. Each language model benefits from the external knowledge extracted from biomedical and general domain resources.", "title": "" }, { "docid": "6d0aba91efbe627d8d98c7f49c34fe3d", "text": "The R language, from the point of view of language design and implementation, is a unique combination of various programming language concepts.
It has functional characteristics like lazy evaluation of arguments, but also allows expressions to have arbitrary side effects. Many runtime data structures, for example variable scopes and functions, are accessible and can be modified while a program executes. Several different object models allow for structured programming, but the object models can interact in surprising ways with each other and with the base operations of R. \n R works well in practice, but it is complex, and it is a challenge for language developers trying to improve on the current state-of-the-art, which is the reference implementation -- GNU R. The goal of this work is to demonstrate that, given the right approach and the right set of tools, it is possible to create an implementation of the R language that provides significantly better performance while keeping compatibility with the original implementation. \n In this paper we describe novel optimizations backed up by aggressive speculation techniques and implemented within FastR, an alternative R language implementation, utilizing Truffle -- a JVM-based language development framework developed at Oracle Labs. We also provide experimental evidence demonstrating effectiveness of these optimizations in comparison with GNU R, as well as Renjin and TERR implementations of the R language.", "title": "" } ]
scidocsrr
1af5ed1db3078377a7ff709f07805425
Automated Correction for Syntax Errors in Programming Assignments using Recurrent Neural Networks
[ { "docid": "598fd1fc1d1d6cba7a838c17efe9481b", "text": "The tens of thousands of high-quality open source software projects on the Internet raise the exciting possibility of studying software development by finding patterns across truly large source code repositories. This could enable new tools for developing code, encouraging reuse, and navigating large projects. In this paper, we build the first giga-token probabilistic language model of source code, based on 352 million lines of Java. This is 100 times the scale of the pioneering work by Hindle et al. The giga-token model is significantly better at the code suggestion task than previous models. More broadly, our approach provides a new “lens” for analyzing software projects, enabling new complexity metrics based on statistical analysis of large corpora. We call these metrics data-driven complexity metrics. We propose new metrics that measure the complexity of a code module and the topical centrality of a module to a software project. In particular, it is possible to distinguish reusable utility classes from classes that are part of a program's core logic based solely on general information theoretic criteria.", "title": "" }, { "docid": "9b942a1342eb3c4fd2b528601fa42522", "text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.", "title": "" } ]
[ { "docid": "ef1ca66424bbf52e5029d1599eb02e39", "text": "The pathogenesis of bacterial vaginosis remains largely elusive, although some microorganisms, including Gardnerella vaginalis, are suspected of playing a role in the etiology of this disorder. Recently culture-independent analysis of microbial ecosystems has proven its efficacy in characterizing the diversity of bacterial populations. Here, we report on the results obtained by combining culture and PCR-based methods to characterize the normal and disturbed vaginal microflora. A total of 150 vaginal swab samples from healthy women (115 pregnant and 35 non-pregnant) were categorized on the basis of Gram stain of direct smear as grade I (n = 112), grade II (n = 26), grade III (n = 9) or grade IV (n = 3). The composition of the vaginal microbial community of eight of these vaginal swabs (three grade I, two grade II and three grade III), all from non-pregnant women, were studied by culture and by cloning of the 16S rRNA genes obtained after direct amplification. Forty-six cultured isolates were identified by tDNA-PCR, 854 cloned 16S rRNA gene fragments were analysed of which 156 by sequencing, yielding a total of 38 species, including 9 presumptively novel species with at least five species that have not been isolated previously from vaginal samples. Interestingly, cloning revealed that Atopobium vaginae was abundant in four out of the five non-grade I specimens. Finally, species specific PCR for A. vaginae and Gardnerella vaginalis pointed to a statistically significant co-occurrence of both species in the bacterial vaginosis samples. Although historically the literature regarding bacterial vaginosis has largely focused on G. vaginalis in particular, several findings of this study – like the abundance of A. vaginae in disturbed vaginal microflora and the presence of several novel species – indicate that much is to be learned about the composition of the vaginal microflora and its relation to the etiology of BV.", "title": "" }, { "docid": "11775f58f85bc3127a5857214ed20df0", "text": "The immune system can be defined as a complex system that protects the organism against organisms or substances that might cause infection or disease. One of the most fascinating characteristics of the immune system is its capability to recognize and respond to pathogens with significant specificity. Innate and adaptive immune responses are able to recognize for‐ eign structures and trigger different molecular and cellular mechanisms for antigen elimina‐ tion. The immune response is critical to all individuals; therefore numerous changes have taken place during evolution to generate variability and specialization, although the im‐ mune system has conserved some important features over millions of years of evolution that are common for all species. The emergence of new taxonomic categories coincided with the diversification of the immune response. Most notably, the emergence of vertebrates coincid‐ ed with the development of a novel type of immune response. Apparently, vertebrates in‐ herited innate immunity from their invertebrate ancestors [1].", "title": "" }, { "docid": "9970c9a191d9223448d205f0acec6976", "text": "This paper presents the complete development and analysis of a soft robotic platform that exhibits peristaltic locomotion. The design principle is based on the antagonistic arrangement of circular and longitudinal muscle groups of Oligochaetes. 
Sequential antagonistic motion is achieved in a flexible braided mesh-tube structure using a nickel titanium (NiTi) coil actuators wrapped in a spiral pattern around the circumference. An enhanced theoretical model of the NiTi coil spring describes the combination of martensite deformation and spring elasticity as a function of geometry. A numerical model of the mesh structures reveals how peristaltic actuation induces robust locomotion and details the deformation by the contraction of circumferential NiTi actuators. Several peristaltic locomotion modes are modeled, tested, and compared on the basis of speed. Utilizing additional NiTi coils placed longitudinally, steering capabilities are incorporated. Proprioceptive potentiometers sense segment contraction, which enables the development of closed-loop controllers. Several appropriate control algorithms are designed and experimentally compared based on locomotion speed and energy consumption. The entire mechanical structure is made of flexible mesh materials and can withstand significant external impact during operation. This approach allows a completely soft robotic platform by employing a flexible control unit and energy sources.", "title": "" }, { "docid": "778e431e83adedb8172cdb55d303c0cc", "text": "As digital visualization tools have become more ubiquitous, humanists have adopted many applications such as GIS mapping, graphs, and charts for statistical display that were developed in other disciplines. But, I will argue, such graphical tools are a kind of intellectual Trojan horse, a vehicle through which assumptions about what constitutes information swarm with potent force. These assumptions are cloaked in a rhetoric taken wholesale from the techniques of the empirical sciences that conceals their epistemological biases under a guise of familiarity. So naturalized are the google maps and bar charts generated from spread sheets that they pass as unquestioned representations of “what is.” This is the hallmark of realist models of knowledge and needs to be subjected to a radical critique to return the humanistic tenets of constructedness and interpretation to the fore. Realist approaches depend above all upon an idea that phenomena are observer-independent and can be characterized as data. Data pass themselves off as mere descriptions of a priori conditions. Rendering observation (the act of creating a statistical, empirical, or subjective account or image) as if it were the same as the phenomena observed collapses the critical distance between the phenomenal world and its interpretation, undoing the basis of interpretation on which humanistic knowledge production is based. We know this. But we seem ready and eager to suspend critical judgment in a rush to visualization. At the very least, humanists beginning to play at the intersection of statistics and graphics ought to take a detour through the substantial discussions of the sociology of knowledge and its developed critique of realist models of data gathering. 1 At best, we need to take on the challenge of developing graphical expressions rooted in and appropriate to interpretative activity. Because realist approaches to visualization assume transparency and equivalence, as if the phenomenal world were self-evident and the apprehension of it a mere mechanical task, they are fundamentally at odds with approaches to humanities scholarship premised on constructivist principles. 
I would argue that even for realist models, those that presume an observer-independent reality available to description, the methods of presenting ambiguity and uncertainty in more nuanced terms would be useful. Some significant progress is being made in visualizing uncertainty in data models for GIS, decision-making, archaeological research and other domains. But an important distinction needs to be clear from the outset: the task of representing ambiguity and uncertainty has to be distinguished from a second task – that of using ambiguity and uncertainty as the basis on which a representation is constructed. This is the difference between putting many kinds of points on a map to show degrees of certainty by shades of color, degrees of crispness, transparency etc., and creating a map whose basic coordinate grid is constructed as an effect of these ambiguities. In the first instance, we have a standard map with a nuanced symbol set. In the second, we create a non-standard map that expresses the constructed-ness of space. Both rely on rethinking our approach to visualization and the assumptions that underpin it.", "title": "" }, { "docid": "384a0a9d9613750892225562cb5ff113", "text": "Large scale, high concurrency, and vast amount of data are important trends for the new generation of websites. Node.js becomes popular and successful to build data-intensive web applications. To study and compare the performance of Node.js, Python-Web and PHP, we used benchmark tests and scenario tests. The experimental results yield some valuable performance data, showing that PHP and Python-Web handle far fewer requests than Node.js in a certain time. In conclusion, our results clearly demonstrate that Node.js is quite lightweight and efficient, which is an ideal fit for I/O intensive websites among the three, while PHP is only suitable for small and middle scale applications, and Python-Web is developer friendly and good for large web architectures. To the best of our knowledge, this is the first paper to evaluate these Web programming technologies with both objective systematic tests (benchmark) and realistic user behavior tests (scenario), especially taking Node.js as the main topic to discuss.", "title": "" }, { "docid": "905d6847be18d7d200fa4224f4cbb411", "text": "Reliability of Active Front End (AFE) converter can be improved if converter faults can be identified before startup. Startup diagnostics of the LCL filter can be a useful tool to accomplish this. An algorithm based on Fast Fourier Transform (FFT) of the filter step response is shown to be able to detect variation of filter components. This is extended to consider conditions of short circuit and open circuit of filter components. This method can identify failed components in an LCL filter before operating the converter, which otherwise may lead to undesirable operation. Analytical expressions are derived for the frequency spectrum of the LCL filter during step excitation using continuous Fourier Transform and Discrete Fourier Transform (DFT). A finite state machine (FSM) based algorithm is used to sequence the startup diagnostics before commencing normal operation of the power converter. It is shown that the additional computational resource required to perform the diagnostic algorithm is small when compared with the overall inverter control program. The diagnostic functions can be readily implemented in advanced digital controllers that are available today. 
The spectral analysis is supported by simulations and experimental results which validate the proposed method.", "title": "" }, { "docid": "5f5cf5235c10fe84e39e6725705a9940", "text": "A fully automatic method for descreening halftone images is presented based on convolutional neural networks with end-to-end learning. Incorporating context level information, the proposed method not only removes halftone artifacts but also synthesizes the fine details lost during halftone. The method consists of two main stages. In the first stage, intrinsic features of the scene are extracted, the low-frequency reconstruction of the image is estimated, and halftone patterns are removed. For the intrinsic features, the edges and object-categories are estimated and fed to the next stage as strong visual and contextual cues. In the second stage, fine details are synthesized on top of the low-frequency output based on an adversarial generative model. In addition, the novel problem of rescreening is addressed, where a natural input image is halftoned so as to be similar to a separately given reference halftone image. To this end, a two-stage convolutional neural network is also presented. Both networks are trained with millions of before-and-after example image pairs of various halftone styles. Qualitative and quantitative evaluations are provided, which demonstrates the effectiveness of the proposed methods.", "title": "" }, { "docid": "6105d4250286a7a90fe20e6b1ec8a6d3", "text": "A well-known attack on RSA with low secret-exponent d was given by Wiener about 15 years ago. Wiener showed that using continued fractions, one can efficiently recover the secret-exponent d from the public key (N, e) as long as d < N. Interestingly, Wiener stated that his attack may sometimes also work when d is slightly larger than N . This raises the question of how much larger d can be: could the attack work with non-negligible probability for d = N 1/4+ρ for some constant ρ > 0? We answer this question in the negative by proving a converse to Wiener’s result. Our result shows that, for any fixed > 0 and all sufficiently large modulus lengths, Wiener’s attack succeeds with negligible probability over a random choice of d < N δ (in an interval of size Ω(N )) as soon as δ > 1/4 + . Thus Wiener’s success bound d < N 1/4 for his algorithm is essentially tight. We also obtain a converse result for a natural class of extensions of the Wiener attack, which are guaranteed to succeed even when δ > 1/4. The known attacks in this class (by Verheul and Van Tilborg and Dujella) run in exponential time, so it is natural to ask whether there exists an attack in this class with subexponential run-time. Our second converse result answers this question also in the negative.", "title": "" }, { "docid": "d15dc60ef2fb1e6096a3aba372698fd9", "text": "One of the most interesting applications of Industry 4.0 paradigm is enhanced process control. Traditionally, process control solutions based on Cyber-Physical Systems (CPS) consider a top-down view where processes are represented as executable high-level descriptions. However, most times industrial processes follow a bottom-up model where processes are executed by low-level devices which are hard-programmed with the process to be executed. Thus, high-level components only may supervise the process execution as devices cannot modify dynamically their behavior. 
Therefore, in this paper we propose a vertical CPS-based solution (including a reference and a functional architecture) adequate to perform enhanced process control in Industry 4.0 scenarios with a bottom-up view. The proposed solution employs an event-driven service-based architecture where control is performed by means of finite state machines. Furthermore, an experimental validation is provided proving that in more than 97% of cases the proposed solution allows a stable and effective control.", "title": "" }, { "docid": "1dbb04e806b1fd2a8be99633807d9f4d", "text": "Realistically animated fluids can add substantial realism to interactive applications such as virtual surgery simulators or computer games. In this paper we propose an interactive method based on Smoothed Particle Hydrodynamics (SPH) to simulate fluids with free surfaces. The method is an extension of the SPH-based technique by Desbrun to animate highly deformable bodies. We gear the method towards fluid simulation by deriving the force density fields directly from the Navier-Stokes equation and by adding a term to model surface tension effects. In contrast to Eulerian grid-based approaches, the particle-based approach makes mass conservation equations and convection terms dispensable which reduces the complexity of the simulation. In addition, the particles can directly be used to render the surface of the fluid. We propose methods to track and visualize the free surface using point splatting and marching cubes-based surface reconstruction. Our animation method is fast enough to be used in interactive systems and to allow for user interaction with models consisting of up to 5000 particles.", "title": "" }, { "docid": "e14c9687e90cb46492441d01a972bf57", "text": "This paper describes our efforts to design a cognitive architecture for object recognition in video. Unlike most efforts in computer vision, our work proposes a Bayesian approach to object recognition in video, using a hierarchical, distributed architecture of dynamic processing elements that learns in a self-organizing way to cluster objects in the video input. A biologically inspired innovation is to implement a top-down pathway across layers in the form of causes, creating effectively a bidirectional processing architecture with feedback. To simplify discrimination, overcomplete representations are utilized. Both inference and parameter learning are performed using empirical priors, while imposing appropriate sparseness constraints. Preliminary results show that the cognitive architecture has features that resemble the functional organization of the early visual cortex. One example showing the use of top-down connections is given to disambiguate a synthetic video from correlated noise.", "title": "" }, { "docid": "6f15684a1ad93edb75d2e865f03ad30a", "text": "Social capital has been identified as crucial to the fostering of resilience in rapidly expanding cities of the Global South. The purpose of this article is to better understand the complexities of urban social interaction and how such interaction can constitute ‘capital’ in achieving urban resilience. A concept analysis was conducted to establish what constitutes social capital, its relevance to vulnerable urban settings and how it can be measured. Social capital is considered to be constituted of three forms of interaction: bonds, bridges and linkages. 
The characteristics of these forms of interaction may vary according to the social, political, cultural and economic diversity to be found within vulnerable urban settings. A framework is outlined to explore the complex nature of social capital in urban settings. On the basis of an illustrative case study, indicators are established to indicate how culturally specific indicators are required to measure social capital that are sensitive to multiple levels of analysis and the development of a multidimensional framework. The framework outlined ought to be adapted to context and validated by future research.", "title": "" }, { "docid": "37f5fcde86e30359e678ff3f957e3c7e", "text": "A Phase I dose-proportionality study is an essential tool to understand drug pharmacokinetic dose-response relationship in early clinical development. There are a number of different approaches to the assessment of dose proportionality. The confidence interval (CI) criteria approach, a staitistically sound and clinically relevant approach, has been proposed to detect dose-proportionality (Smith, et al. 2000), by which the proportionality is declared if the 90% CI for slope is completely contained within the pre-determined critical interval. This method, enhancing the information from a clinical dose-proportionality study, has gradually drawn attention. However, exact power calculation of dose proportinality studies based on CI criteria poses difficulity for practioners since the methodology was essentailly from two one-sided tests (TOST) procedure for the slope, which should be unit under proportionality. It requires sophisticated numerical integration, and it is not available in statistical software packages. This paper presents a SAS Macro to compute the empirical power for the CI-based dose proportinality studies. The resulting sample sizes and corresponding empirical powers suggest that this approach is powerful in detecting dose-proportionality under commonly used sample sizes for phase I studies.", "title": "" }, { "docid": "186b616c56df44ad55cb39ee63ebe906", "text": "RIPEMD-160 is a fast cryptographic hash function that is tuned towards software implementations on 32-bit architectures. It has evolved from the 256-bit extension of MD4, which was introduced in 1990 by Ron Rivest [20, 21]. Its main design feature are two different and independent parallel chains, the result of which are combined at the end of every application of the compression function. As suggested by its name, RIPEMD-160 offers a 160-bit result. It is intended to provide a high security level for the next 10 years or more. RIPEMD-128 is a faster variant of RIPEMD-160, which provides a 128-bit result. Together with SHA-1, RIPEMD-160 and RIPEMD-128 have been included in the International Standard ISO/IEC 10118-3, the publication of which is expected for late 1997 [17]. The goal of this article is to motivate the existence of RIPEMD160, to explain the main design features and to provide a concise description of the algorithm.", "title": "" }, { "docid": "6fb06fff9f16024cf9ccf9a782bffecd", "text": "In this chapter, we discuss 3D compression techniques for reducing the delays in transmitting triangle meshes over the Internet. We first explain how vertex coordinates, which represent surface samples may be compressed through quantization, prediction, and entropy coding. 
We then describe how the connectivity, which specifies how the surface interpolates these samples, may be compressed by compactly encoding the parameters of a connectivity-graph construction process and by transmitting the vertices in the order in which they are encountered by this process. The storage of triangle meshes compressed with these techniques is usually reduced to about a byte per triangle. When the exact geometry and connectivity of the mesh are not essential, the triangulated surface may be simplified or retiled. Although simplification techniques and the progressive transmission of refinements may be used as a compression tool, we focus on recently proposed retiling techniques designed specifically to improve 3D compression. They are often able to reduce the total storage, which combines coordinates and connectivity, to half-a-bit per triangle without exceeding a mean square error of 1/10,000 of the diagonal of a box that contains the solid.", "title": "" }, { "docid": "c7cd22329f1acd70cb27c08b71a73383", "text": "The coming century is surely the century of data. A combination of blind faith and serious purpose makes our society invest massively in the collection and processing of data of all kinds, on scales unimaginable until recently. Hyperspectral Imagery, Internet Portals, Financial tick-by-tick data, and DNA Microarrays are just a few of the betterknown sources, feeding data in torrential streams into scientific and business databases worldwide. In traditional statistical data analysis, we think of observations of instances of particular phenomena (e.g. instance ↔ human being), these observations being a vector of values we measured on several variables (e.g. blood pressure, weight, height, ...). In traditional statistical methodology, we assumed many observations and a few, wellchosen variables. The trend today is towards more observations but even more so, to radically larger numbers of variables – voracious, automatic, systematic collection of hyper-informative detail about each observed instance. We are seeing examples where the observations gathered on individual instances are curves, or spectra, or images, or even movies, so that a single observation has dimensions in the thousands or billions, while there are only tens or hundreds of instances available for study. Classical methods are simply not designed to cope with this kind of explosive growth of dimensionality of the observation vector. We can say with complete confidence that in the coming century, high-dimensional data analysis will be a very significant activity, and completely new methods of high-dimensional data analysis will be developed; we just don’t know what they are yet. Mathematicians are ideally prepared for appreciating the abstract issues involved in finding patterns in such high-dimensional data. Two of the most influential principles in the coming century will be principles originally discovered and cultivated by mathematicians: the blessings of dimensionality and the curse of dimensionality. The curse of dimensionality is a phrase used by several subfields in the mathematical sciences; I use it here to refer to the apparent intractability of systematically searching through a high-dimensional space, the apparent intractability of accurately approximating a general high-dimensional function, the apparent intractability of integrating a high-dimensional function. 
The blessings of dimensionality are less widely noted, but they include the concentration of measure phenomenon (so-called in the geometry of Banach spaces), which means that certain random fluctuations are very well controlled in high dimensions and the success of asymptotic methods, used widely in mathematical statistics and statistical physics, which suggest that statements about very high-dimensional settings may be made where moderate dimensions would be too complicated. There is a large body of interesting work going on in the mathematical sciences, both to attack the curse of dimensionality in specific ways, and to extend the benefits", "title": "" }, { "docid": "fefd1c20391ac59698c80ab9c017bae3", "text": "Compensating changes between a subjects' training and testing session in brain-computer interfacing (BCI) is challenging but of great importance for a robust BCI operation. We show that such changes are very similar between subjects, and thus can be reliably estimated using data from other users and utilized to construct an invariant feature space. This novel approach to learning from other subjects aims to reduce the adverse effects of common nonstationarities, but does not transfer discriminative information. This is an important conceptual difference to standard multi-subject methods that, e.g., improve the covariance matrix estimation by shrinking it toward the average of other users or construct a global feature space. These methods do not reduces the shift between training and test data and may produce poor results when subjects have very different signal characteristics. In this paper, we compare our approach to two state-of-the-art multi-subject methods on toy data and two datasets of EEG recordings from subjects performing motor imagery. We show that it can not only achieve a significant increase in performance, but also that the extracted change patterns allow for a neurophysiologically meaningful interpretation.", "title": "" }, { "docid": "f06cf2892c85fc487d50c17a87061a0d", "text": "Decision-making invokes two fundamental axes of control: affect or valence, spanning reward and punishment, and effect or action, spanning invigoration and inhibition. We studied the acquisition of instrumental responding in healthy human volunteers in a task in which we orthogonalized action requirements and outcome valence. Subjects were much more successful in learning active choices in rewarded conditions, and passive choices in punished conditions. Using computational reinforcement-learning models, we teased apart contributions from putatively instrumental and Pavlovian components in the generation of the observed asymmetry during learning. Moreover, using model-based fMRI, we showed that BOLD signals in striatum and substantia nigra/ventral tegmental area (SN/VTA) correlated with instrumentally learnt action values, but with opposite signs for go and no-go choices. Finally, we showed that successful instrumental learning depends on engagement of bilateral inferior frontal gyrus. Our behavioral and computational data showed that instrumental learning is contingent on overcoming inherent and plastic Pavlovian biases, while our neuronal data showed this learning is linked to unique patterns of brain activity in regions implicated in action and inhibition respectively.", "title": "" }, { "docid": "b3be9d730d982c66657eceacb9d4e526", "text": "Ontology Matching aims to find the semantic correspondences between ontologies that belong to a single domain but that have been developed separately. 
However, there are still some problem areas to be solved, because experts are still needed to supervise the matching processes and an efficient way to reuse the alignments has not yet been found. We propose a novel technique named Reverse Ontology Matching, which aims to find the matching functions that were used in the original process. The use of these functions is very useful for aspects such as modeling behavior from experts, performing matching-by-example, reverse engineering existing ontology matching tools or compressing ontology alignment repositories. Moreover, the results obtained from a widely used benchmark dataset provide evidence of the effectiveness of this approach.", "title": "" }, { "docid": "1fbf8b8ec80e7be388b52d4cbb57dfa8", "text": "Quadcopter also called as quadrotor helicopter, is popular in Unmanned Aerial Vehicles (UAV). They are widely used for variety of applications due to its small size and high stability. In this paper design and development of remote controlled quadcopter using PID (Proportional Integral Derivtive) controller implemented with Ardupilot Mega board is presented. The system consists of IMU (Inertial Measurement Unit) which consists of accelerometer and gyro sensors to determine the system orientation and speed control of four BLDC motors to enable the quadcopter fly in six directions. Simulations analysis of quadcopter is carried out using MATLAB Simulink. Pitch, roll and yaw responses of quadcopter is obtained and PID controller is used to stabilize the system response. Finally the prototype of quadcopter is build PID logic is embedded on it. The working and performance of quadcopter is tested and desired outputs were obtained.", "title": "" } ]
scidocsrr
159e579f88219b6d44608230382acebc
A critical assessment of imbalanced class distribution problem: The case of predicting freshmen student attrition
[ { "docid": "1ac4ac9b112c2554db37de2070d7c2df", "text": "This paper studies empirically the effect of sampling and threshold-moving in training cost-sensitive neural networks. Both oversampling and undersampling are considered. These techniques modify the distribution of the training data such that the costs of the examples are conveyed explicitly by the appearances of the examples. Threshold-moving tries to move the output threshold toward inexpensive classes such that examples with higher costs become harder to be misclassified. Moreover, hard-ensemble and soft-ensemble, i.e., the combination of above techniques via hard or soft voting schemes, are also tested. Twenty-one UCl data sets with three types of cost matrices and a real-world cost-sensitive data set are used in the empirical study. The results suggest that cost-sensitive learning with multiclass tasks is more difficult than with two-class tasks, and a higher degree of class imbalance may increase the difficulty. It also reveals that almost all the techniques are effective on two-class tasks, while most are ineffective and even may cause negative effect on multiclass tasks. Overall, threshold-moving and soft-ensemble are relatively good choices in training cost-sensitive neural networks. The empirical study also suggests that some methods that have been believed to be effective in addressing the class imbalance problem may, in fact, only be effective on learning with imbalanced two-class data sets.", "title": "" }, { "docid": "4eda5bc4f8fa55ae55c69f4233858fc7", "text": "In this paper, we set out to compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. In a credit scoring context, imbalanced data sets frequently occur as the number of defaulting loans in a portfolio is usually much lower than the number of observations that do not default. As well as using traditional classification techniques such as logistic regression, neural networks and decision trees, this paper will also explore the suitability of gradient boosting, least square support vector machines and random forests for loan default prediction. Five real-world credit scoring data sets are used to build classifiers and test their performance. In our experiments, we progressively increase class imbalance in each of these data sets by randomly undersampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected. The performance criterion chosen to measure this effect is the area under the receiver operating characteristic curve (AUC); Friedman’s statistic and Nemenyi post hoc tests are used to test for significance of AUC differences between techniques. The results from this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with pronounced class imbalances in these data sets. We also found that, when faced with a large class imbalance, the C4.5 decision tree algorithm, quadratic discriminant analysis and k-nearest neighbours perform significantly worse than the best performing classifiers. 2011 Elsevier Ltd.", "title": "" } ]
[ { "docid": "ba93902813caa2fc8cddfbaa5f8b4917", "text": "This paper proposes a technique to utilize the power of chatterbots to serve as interactive Support systems to enterprise applications which aim to address a huge audience. The need for support systems arises due to inability of computer illiterate audience to utilize the services offered by an enterprise application. Setting up customer support centers works well for small-medium sized businesses but for mass applications (here E-Governance Systems) the audience counts almost all a country has as its population, Setting up support center that can afford such load is irrelevant. This paper proposes a solution by using AIML based chatterbots to implement Artificial Support Entity (ASE) to such Applications.", "title": "" }, { "docid": "803190e1d9f58a233b573cb842ff4204", "text": "We designed and tested attractors for computer security dialogs: user-interface modifications used to draw users' attention to the most important information for making decisions. Some of these modifications were purely visual, while others temporarily inhibited potentially-dangerous behaviors to redirect users' attention to salient information. We conducted three between-subjects experiments to test the effectiveness of the attractors.\n In the first two experiments, we sent participants to perform a task on what appeared to be a third-party site that required installation of a browser plugin. We presented them with what appeared to be an installation dialog from their operating system. Participants who saw dialogs that employed inhibitive attractors were significantly less likely than those in the control group to ignore clues that installing this software might be harmful.\n In the third experiment, we attempted to habituate participants to dialogs that they knew were part of the experiment. We used attractors to highlight a field that was of no value during habituation trials and contained critical information after the habituation period. Participants exposed to inhibitive attractors were two to three times more likely to make an informed decision than those in the control condition.", "title": "" }, { "docid": "bd3374fefa94fbb11d344d651c0f55bc", "text": "Extensive study has been conducted in the detection of license plate for the applications in intelligent transportation system (ITS). However, these results are all based on images acquired at a resolution of 640 times 480. In this paper, a new method is proposed to extract license plate from the surveillance video which is shot at lower resolution (320 times 240) as well as degraded by video compression. Morphological operations of bottom-hat and morphology gradient are utilized to detect the LP candidates, and effective schemes are applied to select the correct one. The average rates of correct extraction and false alarms are 96.62% and 1.77%, respectively, based on the experiments using more than four hours of video. The experimental results demonstrate the effectiveness and robustness of the proposed method", "title": "" }, { "docid": "fd5f48aebc8fba354137dadb445846bc", "text": "BACKGROUND\nThe syntheses of multiple qualitative studies can pull together data across different contexts, generate new theoretical or conceptual models, identify research gaps, and provide evidence for the development, implementation and evaluation of health interventions. 
This study aims to develop a framework for reporting the synthesis of qualitative health research.\n\n\nMETHODS\nWe conducted a comprehensive search for guidance and reviews relevant to the synthesis of qualitative research, methodology papers, and published syntheses of qualitative health research in MEDLINE, Embase, CINAHL and relevant organisational websites to May 2011. Initial items were generated inductively from guides to synthesizing qualitative health research. The preliminary checklist was piloted against forty published syntheses of qualitative research, purposively selected to capture a range of year of publication, methods and methodologies, and health topics. We removed items that were duplicated, impractical to assess, and rephrased items for clarity.\n\n\nRESULTS\nThe Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) statement consists of 21 items grouped into five main domains: introduction, methods and methodology, literature search and selection, appraisal, and synthesis of findings.\n\n\nCONCLUSIONS\nThe ENTREQ statement can help researchers to report the stages most commonly associated with the synthesis of qualitative health research: searching and selecting qualitative research, quality appraisal, and methods for synthesising qualitative findings. The synthesis of qualitative research is an expanding and evolving methodological area and we would value feedback from all stakeholders for the continued development and extension of the ENTREQ statement.", "title": "" }, { "docid": "5d8aaba4da6c6aebf08d241484451ea8", "text": "The lack of a friendly and flexible operational model of landside operations motivated the creation of a new simulation model adaptable to various airport configurations for estimating the time behavior of passenger and baggage flows, the elements’ capacities and the delays in a generic airport terminal. The validation of the model has been conducted by comparison with the results of previous research about the average behavior of the future Athens airport. In the mean time the proposed model provided interesting dynamical results about both passenger and baggage movements in the system.", "title": "" }, { "docid": "59718c2e471dfaf0fb7463a89312813a", "text": "Many large Internet websites are accessed by users anonymously, without requiring registration or logging-in. However, to provide personalized service these sites build anonymous, yet persistent, user models based on repeated user visits. Cookies, issued when a web browser first visits a site, are typically employed to anonymously associate a website visit with a distinct user (web browser). However, users may reset cookies, making such association short-lived and noisy. In this paper we propose a solution to the cookie churn problem: a novel algorithm for grouping similar cookies into clusters that are more persistent than individual cookies. Such clustering could potentially allow more robust estimation of the number of unique visitors of the site over a certain long time period, and also better user modeling which is key to plenty of web applications such as advertising and recommender systems.\n We present a novel method to cluster browser cookies into groups that are likely to belong to the same browser based on a statistical model of browser visitation patterns. We address each step of the clustering as a binary classification problem estimating the probability that two different subsets of cookies belong to the same browser. 
We observe that our clustering problem is a generalized interval graph coloring problem, and propose a greedy heuristic algorithm for solving it. The scalability of this method allows us to cluster hundreds of millions of browser cookies and provides significant improvements over baselines such as constrained K-means.", "title": "" }, { "docid": "b7d96b6334c1aab6d7496731aaea820e", "text": "Dialogue intent analysis plays an important role for dialogue systems. In this paper,we present a deep hierarchical LSTM model to classify the intent of a dialogue utterance. The model is able to recognize and classify user’s dialogue intent in an efficient way. Moreover, we introduce a memory module to the hierarchical LSTM model, so that our model can utilize more context information to perform classification. We evaluate the two proposed models on a real-world conversational dataset from a Chinese famous e-commerce service. The experimental results show that our proposed model outperforms the baselines.", "title": "" }, { "docid": "4d85bf20a514de0181fb33815d833c55", "text": "STATEMENT OF PROBLEM\nDespite the increasing demand for a digital workflow in the fabrication of indirect restorations, information on the accuracy of the resulting definitive casts is limited.\n\n\nPURPOSE\nThe purpose of this in vitro study was to compare the accuracy of definitive casts produced with digital scans and conventional impressions.\n\n\nMATERIAL AND METHODS\nChamfer preparations were made on the maxillary right canine and second molar of a typodont. Subsequently, 9 conventional impressions were made to produce 9 gypsum casts, and 9 digital scans were made to produce stereolithography additive (SLA) casts from 2 manufacturers: 9 Dreve SLA casts and 9 Scanbiz SLA casts. All casts were then scanned 9 times with an extraoral scanner to produce the reference data set. Trueness was evaluated by superimposing the data sets obtained by scanning the casts with the reference data set. Precision was evaluated by analyzing the deviations among repeated scans. The root mean square (RMS) and percentage of points aligned within the nominal values (±50 μm) of the 3-dimensional analysis were calculated by the software.\n\n\nRESULTS\nGypsum had the best alignment (within 50 μm) with the reference data set (median 95.3%, IQR 16.7) and the least RMS (median 25.8 μm, IQR 14.6), followed by Dreve and Scanbiz. Differences in RMS were observed between gypsum and the SLA casts (P<.001). Within 50 μm, gypsum was superior to Scanbiz (P<.001). Gypsum casts exhibited the highest precision, showing the best alignment (within 50 μm) and the least RMS, followed by Scanbiz and Dreve.\n\n\nCONCLUSIONS\nThis study found that gypsum casts had higher accuracy than SLA casts. Within 50 μm, gypsum casts were better than Scanbiz SLA casts, while gypsum casts and Dreve SLA casts had similar trueness. Significant differences were found among the investigated SLA casts used in the digital workflow.", "title": "" }, { "docid": "be58092e19830b87b5ad73eaf87a528c", "text": "Moving object detection and tracking (D&T) are important initial steps in object recognition, context analysis and indexing processes for visual surveillance systems. It is a big challenge for researchers to make a decision on which D&T algorithm is more suitable for which situation and/or environment and to determine how accurately object D&T (real-time or non-real-time) is made. There is a variety of object D&T algorithms (i.e. 
methods) and publications on their performance comparison and evaluation via performance metrics. This paper provides a systematic review of these algorithms and performance measures and assesses their effectiveness via metrics.", "title": "" }, { "docid": "bd131d4f68ac8ef3ad2a1226f026322d", "text": "Keywords: Vehicle fuel economy; Eco-driving; Human–machine interface; Autonomous vehicle; Driving simulator analysis. Motor vehicles powered by regular gasoline are one of the major sources of pollutants for the local and global environment. The current study developed and validated a new fuel-economy optimization system (FEOS), which receives input from vehicle variables and environment variables (e.g., headway spacing) as input, mathematically computes the optimal acceleration/deceleration value with the Lagrange multipliers method, and sends the optimal values to drivers via a human-machine interface (HMI) or automatic control systems of autonomous vehicles. FEOS can be used in both free-flow and car-following traffic conditions. An experimental study was conducted to evaluate FEOS. It was found that without sacrificing driver safety, drivers with the aid of FEOS consumed significantly less fuel than those without FEOS in all acceleration conditions (22–31% overall gas saving) and the majority of deceleration conditions (12–26% overall gas saving). Compared to relatively expensive vehicle engineering system design and improvement, FEOS provides a feasible way to minimize fuel consumption considering human factors. Applications of the optimal model in the design of both HMI for vehicles with human drivers and autonomous vehicles were discussed. A number of alternatives have been put forward to improve fuel economy of motor vehicles and recently driving behaviors and energy efficient technologies have been seen to offer considerable potential for reducing fuel consumption. Additionally, while the exploitation of energy efficient technologies may take time to implement and be costly in terms of continuously having to satisfy consumer demands for safety, comfort, space and adequate acceleration and performance, encouraging changes in driving behavior can be accomplished relatively quickly. One method to help drivers form appropriate driving behaviors is via the in-vehicle human–machine interface (HMI). For example, van der Voort et al. 
(2001) develop a fuel-efficiency support tool to present visual advice on optimal gear shifting to maximize fuel economy. Appropriate vehicle pedal operations, however, may contribute more than manual shifting operations to fuel economy (Brundell-Freij and Ericsson, 2005). Further pedal operations are applied for both manual-transmission and automatic-transmission vehicles with human drivers as well as autonomous vehicles, while gear shifting operations are only used for manual-transmission ones. Fuel consumption models have been developed to quantify the relationship between fuel consumption and vehicle characteristics, traffic or road conditions but these, in general, are only able to provide approximate fuel consumption estimates. As the model accuracy …", "title": "" }, { "docid": "2b2c30fa2dc19ef7c16cf951a3805242", "text": "A standard approach to estimating online click-based metrics of a ranking function is to run it in a controlled experiment on live users. While reliable and popular in practice, configuring and running an online experiment is cumbersome and time-intensive. In this work, inspired by recent successes of offline evaluation techniques for recommender systems, we study an alternative that uses historical search log to reliably predict online click-based metrics of a new ranking function, without actually running it on live users. To tackle novel challenges encountered in Web search, variations of the basic techniques are proposed. The first is to take advantage of diversified behavior of a search engine over a long period of time to simulate randomized data collection, so that our approach can be used at very low cost. The second is to replace exact matching (of recommended items in previous work) by fuzzy matching (of search result pages) to increase data efficiency, via a better trade-off of bias and variance. Extensive experimental results based on large-scale real search data from a major commercial search engine in the US market demonstrate our approach is promising and has potential for wide use in Web search.", "title": "" }, { "docid": "953997d170fa1a4aafe643c328802a30", "text": "Recently we have developed a new algorithm, PROVEAN (Protein Variation Effect Analyzer), for predicting the functional effect of protein sequence variations, including single amino acid substitutions and small insertions and deletions [2]. The prediction is based on the change, caused by a given variation, in the similarity of the query sequence to a set of its related protein sequences. For this prediction, the algorithm is required to compute a semi-global pairwise sequence alignment score between the query sequence and each of the related sequences. Using dynamic programming, it takes O(n · m) time to compute alignment score between the query sequence Q of length n and a related sequence S of length m. Thus given l different variations in Q, in a naive way it would take O(l · n · m) time to compute the alignment scores between each of the variant query sequences and S. In this paper, we present a new approach to efficiently compute the pairwise alignment scores for l variations, which takes O((n + l) · m) time when the length of variations is bounded by a constant. In this approach, we further utilize the solutions of overlapping subproblems, which are already used by dynamic programming approach. 
Our algorithm has been used to build a new database for precomputed prediction scores for all possible single amino acid substitutions, single amino acid insertions, and up to 10 amino acids deletions in about 91K human proteins (including isoforms), where l becomes very large, that is, l = O(n). The PROVEAN source code and web server are available at http://provean.jcvi.org.", "title": "" }, { "docid": "21fb04bbdf23094a5967661787d1f2de", "text": "We present a practical, stratified autocalibration algorithm with theoretical guarantees of global optimality. Given a projective reconstruction, the first stage of the algorithm upgrades it to affine by estimating the position of the plane at infinity. The plane at infinity is computed by globally minimizing a least squares formulation of the modulus constraints. In the second stage, the algorithm upgrades this affine reconstruction to a metric one by globally minimizing the infinite homography relation to compute the dual image of the absolute conic (DIAC). The positive semidefiniteness of the DIAC is explicitly enforced as part of the optimization process, rather than as a post-processing step. For each stage, we construct and minimize tight convex relaxations of the highly non-convex objective functions in a branch and bound optimization framework. We exploit the problem structure to restrict the search space for the DIAC and the plane at infinity to a small, fixed number of branching dimensions, independent of the number of views. Experimental evidence of the accuracy, speed and scalability of our algorithm is presented on synthetic and real data. MATLAB code for the implementation is made available to the community.", "title": "" }, { "docid": "18c30c601e5f52d5117c04c85f95105b", "text": "Crohn's disease is a relapsing systemic inflammatory disease, mainly affecting the gastrointestinal tract with extraintestinal manifestations and associated immune disorders. Genome wide association studies identified susceptibility loci that--triggered by environmental factors--result in a disturbed innate (ie, disturbed intestinal barrier, Paneth cell dysfunction, endoplasmic reticulum stress, defective unfolded protein response and autophagy, impaired recognition of microbes by pattern recognition receptors, such as nucleotide binding domain and Toll like receptors on dendritic cells and macrophages) and adaptive (ie, imbalance of effector and regulatory T cells and cytokines, migration and retention of leukocytes) immune response towards a diminished diversity of commensal microbiota. 
We discuss the epidemiology, immunobiology, amd natural history of Crohn's disease; describe new treatment goals and risk stratification of patients; and provide an evidence based rational approach to diagnosis (ie, work-up algorithm, new imaging methods [ie, enhanced endoscopy, ultrasound, MRI and CT] and biomarkers), management, evolving therapeutic targets (ie, integrins, chemokine receptors, cell-based and stem-cell-based therapies), prevention, and surveillance.", "title": "" }, { "docid": "786540fad61e862657b778eb57fe1b24", "text": "OBJECTIVE\nTo compare pharmacokinetics (PK) and pharmacodynamics (PD) of insulin glargine in type 2 diabetes mellitus (T2DM) after evening versus morning administration.\n\n\nRESEARCH DESIGN AND METHODS\nTen T2DM insulin-treated persons were studied during 24-h euglycemic glucose clamp, after glargine injection (0.4 units/kg s.c.), either in the evening (2200 h) or the morning (1000 h).\n\n\nRESULTS\nThe 24-h glucose infusion rate area under the curve (AUC0-24h) was similar in the evening and morning studies (1,058 ± 571 and 995 ± 691 mg/kg × 24 h, P = 0.503), but the first 12 h (AUC0-12h) was lower with evening versus morning glargine (357 ± 244 vs. 593 ± 374 mg/kg × 12 h, P = 0.004), whereas the opposite occurred for the second 12 h (AUC12-24h 700 ± 396 vs. 403 ± 343 mg/kg × 24 h, P = 0.002). The glucose infusion rate differences were totally accounted for by different rates of endogenous glucose production, not utilization. Plasma insulin and C-peptide levels did not differ in evening versus morning studies. Plasma glucagon levels (AUC0-24h 1,533 ± 656 vs. 1,120 ± 344 ng/L/h, P = 0.027) and lipolysis (free fatty acid AUC0-24h 7.5 ± 1.6 vs. 8.9 ± 1.9 mmol/L/h, P = 0.005; β-OH-butyrate AUC0-24h 6.8 ± 4.7 vs. 17.0 ± 11.9 mmol/L/h, P = 0.005; glycerol, P < 0.020) were overall more suppressed after evening versus morning glargine administration.\n\n\nCONCLUSIONS\nThe PD of insulin glargine differs depending on time of administration. With morning administration insulin activity is greater in the first 0-12 h, while with evening administration the activity is greater in the 12-24 h period following dosing. However, glargine PK and plasma C-peptide levels were similar, as well as glargine PD when analyzed by 24-h clock time independent of the time of administration. Thus, the results reflect the impact of circadian changes in insulin sensitivity in T2DM (lower in the night-early morning vs. afternoon hours) rather than glargine per se.", "title": "" }, { "docid": "7fece61e99d0b461b04bcf0dfa81639d", "text": "The rapid advancement of robotics technology in recent years has pushed the development of a distinctive field of robotic applications, namely robotic exoskeletons. Because of the aging population, more people are suffering from neurological disorders such as stroke, central nervous system disorder, and spinal cord injury. As manual therapy seems to be physically demanding for both the patient and therapist, robotic exoskeletons have been developed to increase the efficiency of rehabilitation therapy. Robotic exoskeletons are capable of providing more intensive patient training, better quantitative feedback, and improved functional outcomes for patients compared to manual therapy. This review emphasizes treadmill-based and over-ground exoskeletons for rehabilitation. 
Analyses of their mechanical designs, actuation systems, and integrated control strategies are given priority because the interactions between these components are crucial for the optimal performance of the rehabilitation robot. The review also discusses the limitations of current exoskeletons and technical challenges faced in exoskeleton development. A general perspective of the future development of more effective robot exoskeletons, specifically real-time biological synergy-based exoskeletons, could help promote brain plasticity among neurologically impaired patients and allow them to regain normal walking ability.", "title": "" }, { "docid": "0e02a468a65909b93d3876f30a247ab1", "text": "Implant therapy can lead to peri-implantitis, and none of the methods used to treat this inflammatory response have been predictably effective. It is nearly impossible to treat infected surfaces such as TiUnite (a titanium oxide layer) that promote osteoinduction, but finding an effective way to do so is essential. Experiments were conducted to determine the optimum irradiation power for stripping away the contaminated titanium oxide layer with Er:YAG laser irradiation, the degree of implant heating as a result of Er:YAG laser irradiation, and whether osseointegration was possible after Er:YAG laser microexplosions were used to strip a layer from the surface of implants placed in beagle dogs. The Er:YAG laser was effective at removing an even layer of titanium oxide, and the use of water spray limited heating of the irradiated implant, thus protecting the surrounding bone tissue from heat damage.", "title": "" }, { "docid": "d2e25c512717399fdace99bf640c8843", "text": "Credit cards are one of the electronic payment modes, and credit card fraud is the use of a credit or debit card in a fraudulent way, for example purchasing goods without paying or making unauthorized payments from an account. In this paper, we propose an algorithm that combines two data mining techniques, the hidden Markov model and the genetic algorithm, and obtains better results than either approach on its own. Furthermore, its performance in terms of precision, recall, and F-measure is also higher than that of the state-of-the-art papers included in the comparison.", "title": "" } ]
scidocsrr
ca9b76b73525ec2ae6144b049ddb873e
A New Lane Line Segmentation and Detection Method based on Inverse Perspective Mapping
[ { "docid": "42cfea27f8dcda6c58d2ae0e86f2fb1a", "text": "Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle mounted laser range scanner and a ccd camera. Through experimentations, we have shown that a clustered particle filter can be used to efficiently extract lane markings.", "title": "" }, { "docid": "261f146b67fd8e13d1ad8c9f6f5a8845", "text": "Vision based automatic lane tracking system requires information such as lane markings, road curvature and leading vehicle be detected before capturing the next image frame. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image somehow distorts the actual shape of the road, which involves the width, height, and depth. Respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as an inverse perspective mapping (IPM). This paper outlines the procedures involved.", "title": "" } ]
[ { "docid": "e3ccebbfb328e525c298816950d135a5", "text": "It is important for robots to be able to decide whether they can go through a space or not, as they navigate through a dynamic environment. This capability can help them avoid injury or serious damage, e.g., as a result of running into people and obstacles, getting stuck, or falling off an edge. To this end, we propose an unsupervised and a near-unsupervised method based on Generative Adversarial Networks (GAN) to classify scenarios as traversable or not based on visual data. Our method is inspired by the recent success of data-driven approaches on computer vision problems and anomaly detection, and reduces the need for vast amounts of negative examples at training time. Collecting negative data indicating that a robot should not go through a space is typically hard and dangerous because of collisions; whereas collecting positive data can be automated and done safely based on the robot’s own traveling experience. We verify the generality and effectiveness of the proposed approach on a test dataset collected in a previously unseen environment with a mobile robot. Furthermore, we show that our method can be used to build costmaps (we call as ”GoNoGo” costmaps) for robot path planning using visual data only.", "title": "" }, { "docid": "5e5c2619ea525ef77cbdaabb6a21366f", "text": "Data profiling is an information analysis technique on data stored inside database. Data profiling purpose is to ensure data quality by detecting whether the data in the data source compiles with the established business rules. Profiling could be performed using multiple analysis techniques depending on the data element to be analyzed. The analysis process also influenced by the data profiling tool being used. This paper describes tehniques of profiling analysis using open-source tool OpenRefine. The method used in this paper is case study method, using data retrieved from BPOM Agency website for checking commodity traditional medicine permits. Data attributes that became the main concern of this paper is Nomor Ijin Edar (NIE / distribution permit number) and registrar company name. The result of this research were suggestions to improve data quality on NIE and company name, which consists of data cleansing and improvement to business process and applications.", "title": "" }, { "docid": "d9d68377bb73d7abca39455b49abe8b7", "text": "A boosting-based method of learning a feed-forward artificial neural network (ANN) with a single layer of hidden neurons and a single output neuron is presented. Initially, an algorithm called Boostron is described that learns a single-layer perceptron using AdaBoost and decision stumps. It is then extended to learn weights of a neural network with a single hidden layer of linear neurons. Finally, a novel method is introduced to incorporate non-linear activation functions in artificial neural network learning. The proposed method uses series representation to approximate non-linearity of activation functions, learns the coefficients of nonlinear terms by AdaBoost. It adapts the network parameters by a layer-wise iterative traversal of neurons and an appropriate reduction of the problem. A detailed performances comparison of various neural network models learned the proposed methods and those learned using the Least Mean Squared learning (LMS) and the resilient back-propagation (RPROP) is provided in this paper. 
Several favorable results are reported for 17 synthetic and real-world datasets with different degrees of difficulties for both binary and multi-class problems. Email addresses: mubasher.baig@nu.edu.pk, awais@lums.edu.pk (Mirza M. Baig, Mian. M. Awais), alfy@kfupm.edu.sa (El-Sayed M. El-Alfy) Preprint submitted to Neurocomputing March 9, 2017", "title": "" }, { "docid": "9da1449675af42a2fc75ba8259d22525", "text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet `̀ brands'' such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term businessconsumer relationships (Fournier, 1998). Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as `̀ perceptions about a brand as reflected by the brand associations held in consumer memory''. These associations include perceptions of brand quality and attitudes toward the brand. Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are The current issue and full text archive of this journal is available at http://www.emerald-library.com The authors thank Paul Herr, Donnie Lichtenstein, Rex Moody, Dave Cravens and Julie Baker for helpful comments on earlier versions of this manuscript. Funding was provided by the Graduate School of the University of Colorado and the Charles Tandy American Enterprise Center at Texas Christian University. Top priority for many firms today 350 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000, pp. 
350-368, # MCB UNIVERSITY PRESS, 1061-0421 An executive summary for managers and executive readers can be found at the end of this article multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations, (multi-dimensional) as they propose, or if they are simply indicators of brand associations (unidimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; RoedderJohn et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multidimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: . test a protocol for developing category-specific measures of brand image; . examine the conceptualization of brand associations as a multidimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and . explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background Brand associations According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything `̀ linked'' in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand. Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Practical measurement protocol Importance to marketers and consumers JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 351 Scales to measure partially brand associations have been developed. 
For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length. Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, `̀ using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value''. Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). Brand image, brand attitude, and perceived quality Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan,1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs. A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand ± whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing Linked in memory to a brand Reasoned or emotional perceptions 352 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 literature. 
Bruner and Hensel (1996) reported 66 published studies which measured brand attitud", "title": "" }, { "docid": "a8670bebe828e07111f962d72c5909aa", "text": "Personalities are general properties of humans and other animals. Different personality traits are phenotypically correlated, and heritabilities of personality traits have been reported in humans and various animals. In great tits, consistent heritable differences have been found in relation to exploration, which is correlated with various other personality traits. In this paper, we investigate whether or not risk-taking behaviour is part of these avian personalities. We found that (i) risk-taking behaviour is repeatable and correlated with exploratory behaviour in wild-caught hand-reared birds, (ii) in a bi-directional selection experiment on 'fast' and 'slow' early exploratory behaviour, bird lines tend to differ in risk-taking behaviour, and (iii) within-nest variation of risk-taking behaviour is smaller than between-nest variation. To show that risk-taking behaviour has a genetic component in a natural bird population, we bred great tits in the laboratory and artificially selected 'high' and 'low' risk-taking behaviour for two generations. Here, we report a realized heritability of 19.3 +/- 3.3% (s.e.m.) for risk-taking behaviour. With these results we show in several ways that risk-taking behaviour is linked to exploratory behaviour, and we therefore have evidence for the existence of avian personalities. Moreover, we prove that there is heritable variation in more than one correlated personality trait in a natural population, which demonstrates the potential for correlated evolution.", "title": "" }, { "docid": "9aa21d2b6ea52e3e1bdd3e2795d1bf03", "text": "Dining cryptographers networks (or DC-nets) are a privacypreserving primitive devised by Chaum for anonymous message publication. A very attractive feature of the basic DC-net is its non-interactivity. Subsequent to key establishment, players may publish their messages in a single broadcast round, with no player-to-player communication. This feature is not possible in other privacy-preserving tools like mixnets. A drawback to DC-nets, however, is that malicious players can easily jam them, i.e., corrupt or block the transmission of messages from honest parties, and may do so without being traced. Several researchers have proposed valuable methods of detecting cheating players in DC-nets. This is usually at the cost, however, of multiple broadcast rounds, even in the optimistic case, and often of high computational and/or communications overhead, particularly for fault recovery. We present new DC-net constructions that simultaneously achieve noninteractivity and high-probability detection and identification of cheating players. Our proposals are quite efficient, imposing a basic cost that is linear in the number of participating players. Moreover, even in the case of cheating in our proposed system, just one additional broadcast round suffices for full fault recovery. Among other tools, our constructions employ bilinear maps, a recently popular cryptographic technique for reducing communication complexity.", "title": "" }, { "docid": "be8efe56e56bccf1668faa7b9c0a6e57", "text": "Deep convolutional neural networks (CNNs) have been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition. However, the process of learning and prediction is little understood, particularly when it is applied to spectrograms. 
We introduce auralisation of a CNN to understand its underlying mechanism, which is based on a deconvolution procedure introduced in [2]. Auralisation of a CNN is converting the learned convolutional features that are obtained from deconvolution into audio signals. In the experiments and discussions, we explain trained features of a 5-layer CNN based on the deconvolved spectrograms and auralised signals. The pairwise correlations per layers with varying different musical attributes are also investigated to understand the evolution of the learnt features. It is shown that in the deep layers, the features are learnt to capture textures, the patterns of continuous distributions, rather than shapes of lines.", "title": "" }, { "docid": "133a48a5c6c568d33734bd95d4aec0b2", "text": "The topic information of conversational content is important for continuation with communication, so topic detection and tracking is one of important research. Due to there are many topic transform occurring frequently in long time communication, and the conversation maybe have many topics, so it's important to detect different topics in conversational content. This paper detects topic information by using agglomerative clustering of utterances and Dynamic Latent Dirichlet Allocation topic model, uses proportion of verb and noun to analyze similarity between utterances and cluster all utterances in conversational content by agglomerative clustering algorithm. The topic structure of conversational content is friability, so we use speech act information and gets the hypernym information by E-HowNet that obtains robustness of word categories. Latent Dirichlet Allocation topic model is used to detect topic in file units, it just can detect only one topic if uses it in conversational content, because of there are many topics in conversational content frequently, and also uses speech act information and hypernym information to train the latent Dirichlet allocation models, then uses trained models to detect different topic information in conversational content. For evaluating the proposed method, support vector machine is developed for comparison. According to the experimental results, we can find the proposed method outperforms the approach based on support vector machine in topic detection and tracking in spoken dialogue.", "title": "" }, { "docid": "09985252933e82cf1615dabcf1e6d9a2", "text": "Facial landmark detection plays a very important role in many facial analysis applications such as identity recognition, facial expression analysis, facial animation, 3D face reconstruction as well as facial beautification. With the recent advance of deep learning, the performance of facial landmark detection, including on unconstrained inthe-wild dataset, has seen considerable improvement. This paper presents a survey of deep facial landmark detection for 2D images and video. A comparative analysis of different face alignment approaches is provided as well as some future research directions.", "title": "" }, { "docid": "f55ac9e319ad8b9782a34251007a5d06", "text": "The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. 
In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.", "title": "" }, { "docid": "b3fc899c49ceb699f62b43bb0808a1b2", "text": "Social network users publicly share a wide variety of information with their followers and the general public ranging from their opinions, sentiments and personal life activities. There has already been significant advance in analyzing the shared information from both micro (individual user) and macro (community level) perspectives, giving access to actionable insight about user and community behaviors. The identification of personal life events from user’s profiles is a challenging yet important task, which if done appropriately, would facilitate more accurate identification of users’ preferences, interests and attitudes. For instance, a user who has just broken his phone, is likely to be upset and also be looking to purchase a new phone. While there is work that identifies tweets that include mentions of personal life events, our work in this paper goes beyond the state of the art by predicting a future personal life event that a user will be posting about on Twitter solely based on the past tweets. We propose two architectures based on recurrent neural networks, namely the classification and generation architectures, that determine the future personal life event of a user. We evaluate our work based on a gold standard Twitter life event dataset and compare our work with the state of the art baseline technique for life event detection. While presenting performance measures, we also discuss the limitations of our work in this paper.", "title": "" }, { "docid": "c0b000176bba658ef702872f0174b602", "text": "Distributed Denial of Service (DDoS) attacks represent a major threat to uninterrupted and efficient Internet service. In this paper, we empirically evaluate several major information metrics, namely, Hartley entropy, Shannon entropy, Renyi’s entropy, generalized entropy, Kullback–Leibler divergence and generalized information distance measure in their ability to detect both low-rate and high-rate DDoS attacks. These metrics can be used to describe characteristics of network traffic data and an appropriate metric facilitates building an effective model to detect both low-rate and high-rate DDoS attacks. We use MIT Lincoln Laboratory, CAIDA and TUIDS DDoS datasets to illustrate the efficiency and effectiveness of each metric for DDoS detection. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a673945eaa9b5a350f7d7421c45ac238", "text": "The intention of this study was to identify the bacterial pathogens infecting Oreochromis niloticus (Nile tilapia) and Clarias gariepinus (African catfish), and to establish the antibiotic susceptibility of fish bacteria in Uganda. A total of 288 fish samples from 40 fish farms (ponds, cages, and tanks) and 8 wild water sites were aseptically collected and bacteria isolated from the head kidney, liver, brain and spleen. 
The isolates were identified by their morphological characteristics, conventional biochemical tests and Analytical Profile Index test kits. Antibiotic susceptibility of selected bacteria was determined by the Kirby-Bauer disc diffusion method. The following well-known fish pathogens were identified at a farm prevalence of; Aeromonas hydrophila (43.8%), Aeromonas sobria (20.8%), Edwardsiella tarda (8.3%), Flavobacterium spp. (4.2%) and Streptococcus spp. (6.3%). Other bacteria with varying significance as fish pathogens were also identified including Plesiomonas shigelloides (25.0%), Chryseobacterium indoligenes (12.5%), Pseudomonas fluorescens (10.4%), Pseudomonas aeruginosa (4.2%), Pseudomonas stutzeri (2.1%), Vibrio cholerae (10.4%), Proteus spp. (6.3%), Citrobacter spp. (4.2%), Klebsiella spp. (4.2%) Serratia marcescens (4.2%), Burkholderia cepacia (2.1%), Comamonas testosteroni (8.3%) and Ralstonia picketti (2.1%). Aeromonas spp., Edwardsiella tarda and Streptococcus spp. were commonly isolated from diseased fish. Aeromonas spp. (n = 82) and Plesiomonas shigelloides (n = 73) were evaluated for antibiotic susceptibility. All isolates tested were susceptible to at-least ten (10) of the fourteen antibiotics evaluated. High levels of resistance were however expressed by all isolates to penicillin, oxacillin and ampicillin. This observed resistance is most probably intrinsic to those bacteria, suggesting minimal levels of acquired antibiotic resistance in fish bacteria from the study area. To our knowledge, this is the first study to establish the occurrence of several bacteria species infecting fish; and to determine antibiotic susceptibility of fish bacteria in Uganda. The current study provides baseline information for future reference and fish disease management in the country.", "title": "" }, { "docid": "b3923d263c230f527f06b85275522f60", "text": "Cloud computing is a relatively new concept that offers the potential to deliver scalable elastic services to many. The notion of pay-per use is attractive and in the current global recession hit economy it offers an economic solution to an organizations’ IT needs. Computer forensics is a relatively new discipline born out of the increasing use of computing and digital storage devices in criminal acts (both traditional and hi-tech). Computer forensic practices have been around for several decades and early applications of their use can be charted back to law enforcement and military investigations some 30 years ago. In the last decade computer forensics has developed in terms of procedures, practices and tool support to serve the law enforcement community. However, it now faces possibly its greatest challenges in dealing with cloud computing. Through this paper we explore these challenges and suggest some possible solutions.", "title": "" }, { "docid": "169ed8d452a7d0dd9ecf90b9d0e4a828", "text": "Technology is common in the domain of knowledge distribution, but it rarely enhances the process of knowledge use. Distribution delivers knowledge to the potential user's desktop but cannot dictate what he or she does with it thereafter. It would be interesting to envision technologies that help to manage personal knowledge as it applies to decisions and actions. The viewpoints about knowledge vary from individual, community, society, personnel development or national development. Personal Knowledge Management (PKM) integrates Personal Information Management (PIM), focused on individual skills, with Knowledge Management (KM). 
KM Software is a subset of Enterprise content management software and which contains a range of software that specialises in the way information is collected, stored and/or accessed. This article focuses on KM skills, PKM and PIM Open Sources Software, Social Personal Management and also highlights the Comparison of knowledge base management software and its use.", "title": "" }, { "docid": "7095bf529a060dd0cd7eeb2910998cf8", "text": "The proliferation of internet along with the attractiveness of the web in recent years has made web mining as the research area of great magnitude. Web mining essentially has many advantages which makes this technology attractive to researchers. The analysis of web user’s navigational pattern within a web site can provide useful information for applications like, server performance enhancements, restructuring a web site, direct marketing in ecommerce etc. The navigation paths may be explored based on some similarity criteria, in order to get the useful inference about the usage of web. The objective of this paper is to propose an effective clustering technique to group users’ sessions by modifying K-means algorithm and suggest a method to compute the distance between sessions based on similarity of their web access path, which takes care of the issue of the user sessions that are of variable", "title": "" }, { "docid": "409d104fa3e992ac72c65b004beaa963", "text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.", "title": "" }, { "docid": "d6bbec8d1426cacba7f8388231f04add", "text": "This paper presents a novel multiple-frequency resonant inverter for induction heating (IH) applications. By adopting a center tap transformer, the proposed resonant inverter can give load switching frequency as twice as the isolated-gate bipolar transistor (IGBT) switching frequency. The structure and the operation of the proposed topology are described in order to demonstrate how the output frequency of the proposed resonant inverter is as twice as the switching frequency of IGBTs. In addition to this, the IGBTs in the proposed topology work in zero-voltage switching during turn-on phase of the switches. The new topology is verified by the experimental results using a prototype for IH applications. Moreover, increased efficiency of the proposed inverter is verified by comparison with conventional designs.", "title": "" }, { "docid": "6ec0b302a485b787b3d21b89f79a0110", "text": "This paper draws on primary and secondary data to propose a taxonomy of strategies, or \"schools.\" for knowledge management. The primary purpose of this fratiiework is to guide executives on choices to initiate knowledge tnanagement projects according to goals, organizational character, and technological, behavioral, or economic biases. 
It may also be useful to teachers in demonstrating the scope of knowledge management and to researchers in generating propositions for further study.", "title": "" }, { "docid": "945bf7690169b5f2e615324fb133bc19", "text": "Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.", "title": "" } ]
scidocsrr
e94dffb1b53c6fd2937ee59e687d511d
Privacy Preserving Data Mining
[ { "docid": "5ed4c23e1fcfb3f18c18bb1eb6f408ab", "text": "In this paper we introduce the concept of privacy preserving data mining. In our model, two parties owning confidential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. This problem has many practical and important applications, such as in medical research with confidential patient records. Data mining algorithms are usually complex, especially as the size of the input is measured in megabytes, if not gigabytes. A generic secure multi-party computation solution, based on evaluation of a circuit computing the algorithm on the entire input, is therefore of no practical use. We focus on the problem of decision tree learning and use ID3, a popular and widely used algorithm for this problem. We present a solution that is considerably more efficient than generic solutions. It demands very few rounds of communication and reasonable bandwidth. In our solution, each party performs by itself a computation of the same order as computing the ID3 algorithm for its own database. The results are then combined using efficient cryptographic protocols, whose overhead is only logarithmic in the number of transactions in the databases. We feel that our result is a substantial contribution, demonstrating that secure multi-party computation can be made practical, even for complex problems and large inputs.", "title": "" }, { "docid": "0a968f1dcba70ab1a42c25b1a6ec2a5c", "text": "In recent years, privacy-preserving data mining has been studied extensively, because of the wide proliferation of sensitive information on the internet. A number of algorithmic techniques have been designed for privacy-preserving data mining. In this paper, we provide a review of the state-of-the-art methods for privacy. We discuss methods for randomization, k-anonymization, and distributed privacy-preserving data mining. We also discuss cases in which the output of data mining applications needs to be sanitized for privacy-preservation purposes. We discuss the computational and theoretical limits associated with privacy-preservation over high dimensional data sets.", "title": "" } ]
[ { "docid": "bb782cfc4528de63c38dfc2165f9c4b4", "text": "Many studies have investigated the smart grid architecture and communication models in the past few years. However, the communication model and architecture for a smart grid still remain unclear. Today's electric power distribution is very complex and maladapted because of the lack of efficient and cost-effective energy generation, distribution, and consumption management systems. A wireless smart grid communication system can playan important role in achieving these goals. In thispaper, we describe a smart grid communication architecture in which we merge customers and distributors into a single domain. In the proposed architecture, all the home area networks, neighborhood area networks, and local electrical equipment form a local wireless mesh network (LWMN). Each device or meter can act as a source, router, or relay. The data generated in any node (device/meter) reaches the data collector via other nodes. The data collector transmits this data via the access point of a wide area network (WAN). Finally, data is transferred to the service provider or to the control center of the smart grid. We propose a wireless cooperative communication model for the LWMN. We deploy a limited number of smart relays to improve the performance of the network. A novel relay selection mechanism is also proposed to reduce the relay selection overhead. Simulation results show that our cooperative smart grid (coopSG) communication model improves the end-to-end packet delivery latency, throughput, and energy efficiency over both the Wang et al. and Niyato et al. models.", "title": "" }, { "docid": "df701752c19f1b0ff56555a89201d0a9", "text": "This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristically. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization.", "title": "" }, { "docid": "4d43bf711d1d756c2067369bbc9f8137", "text": "This paper develops a framework for examining the effect of demand uncertainty and forecast error on unit costs and customer service levels in the supply chain, including Material Requirements Planning (MRP) type manufacturing systems. The aim is to overcome the methodological limitations and confusion that has arisen in much earlier research. To illustrate the issues, the problem of estimating the value of improving forecasting accuracy for a manufacturer was simulated. The topic is of practical importance because manufacturers spend large sums of money in purchasing and staffing forecasting support systems to achieve more accurate forecasts. In order to estimate the value a two-level MRP system with lot sizing where the product is manufactured for stock was simulated. Final product demand was generated by two commonly occurring stochastic processes and with different variances. Different levels of forecasting error were then introduced to arrive at corresponding values for improving forecasting accuracy. The quantitative estimates of improved accuracy were found to depend on both the demand generating process and the forecasting method. 
Within this more complete framework, the substantive results confirm earlier research that the best lot sizing rules for the deterministic situation are the worst whenever there is uncertainty in demand. However, size matters, both in the demand uncertainty and forecasting errors. The quantitative differences depend on service level and also the form of demand uncertainty. Unit costs for a given service level increase exponentially as the uncertainty in the demand data increases. The paper also estimates the effects of mis-specification of different sizes of forecast error in addition to demand uncertainty. In those manufacturing problems with high demand uncertainty and high forecast error, improved forecast accuracy should lead to substantial percentage improvements in unit costs. Methodologically, the results demonstrate the need to simulate demand uncertainty and the forecasting process separately. Journal of the Operational Research Society (2011) 62, 483–500. doi:10.1057/jors.2010.40 Published online 16 June 2010", "title": "" }, { "docid": "78db8b57c3221378847092e5283ad754", "text": "This paper analyzes correlations and causalities between Bitcoin market indicators and Twitter posts containing emotional signals on Bitcoin. Within a timeframe of 104 days (November 23 2013 March 7 2014), about 160,000 Twitter posts containing ”bitcoin” and a positive, negative or uncertainty related term were collected and further analyzed. For instance, the terms ”happy”, ”love”, ”fun”, ”good”, ”bad”, ”sad” and ”unhappy” represent positive and negative emotional signals, while ”hope”, ”fear” and ”worry” are considered as indicators of uncertainty. The static (daily) Pearson correlation results show a significant positive correlation between emotional tweets and the close price, trading volume and intraday price spread of Bitcoin. However, a dynamic Granger causality analysis does not confirm a causal effect of emotional Tweets on Bitcoin market values. To the contrary, the analyzed data shows that a higher Bitcoin trading volume Granger causes more signals of uncertainty within a 24 to 72hour timeframe. This result leads to the interpretation that emotional sentiments rather mirror the market than that they make it predictable. Finally, the conclusion of this paper is that the microblogging platform Twitter is Bitcoins virtual trading floor, emotionally reflecting its trading dynamics.2", "title": "" }, { "docid": "ce17d4ecfe780d5dcc4e2910063c87f5", "text": "Article history: Transgender people face ma Received 14 December 2007 Received in revised form 31 December 2008 Accepted 20 January 2009 Available online 24 January 2009", "title": "" }, { "docid": "b00c6771f355577437dee2cdd63604b8", "text": "A person gets frustrated when he faces slow speed as many devices are connected to the same network. As the number of people accessing wireless internet increases, it’s going to result in clogged airwaves. Li-Fi is transmission of data through illumination by taking the fiber out of fiber optics by sending data through a LED light bulb that varies in intensity faster than the human eye can follow.", "title": "" }, { "docid": "9b06026e998df745d820fbd835554b13", "text": "There have been significant advances in the field of Internet of Things (IoT) recently. At the same time there exists an ever-growing demand for ubiquitous healthcare systems to improve human health and well-being. 
In most of IoT-based patient monitoring systems, especially at smart homes or hospitals, there exists a bridging point (i.e., gateway) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks. These gateways have beneficial knowledge and constructive control over both the sensor network and the data to be transmitted through the Internet. In this paper, we exploit the strategic position of such gateways to offer several higher-level services such as local storage, real-time local data processing, embedded data mining, etc., proposing thus a Smart e-Health Gateway. By taking responsibility for handling some burdens of the sensor network and a remote healthcare center, a Smart e-Health Gateway can cope with many challenges in ubiquitous healthcare systems such as energy efficiency, scalability, and reliability issues. A successful implementation of Smart e-Health Gateways enables massive deployment of ubiquitous health monitoring systems especially in clinical environments. We also present a case study of a Smart e-Health Gateway called UTGATE where some of the discussed higher-level features have been implemented. Our proof-of-concept design demonstrates an IoT-based health monitoring system with enhanced overall system energy efficiency, performance, interoperability, security, and reliability.", "title": "" }, { "docid": "1ada0fc6b22bba07d9baf4ccab437671", "text": "Tree-based path planners have been shown to be well suited to solve various high dimensional motion planning problems. Here we present a variant of the Rapidly-Exploring Random Tree (RRT) path planning algorithm that is able to explore narrow passages or difficult areas more effectively. We show that both workspace obstacle information and C-space information can be used when deciding which direction to grow. The method includes many ways to grow the tree, some taking into account the obstacles in the environment. This planner works best in difficult areas when planning for free flying rigid or articulated robots. Indeed, whereas the standard RRT can face difficulties planning in a narrow passage, the tree based planner presented here works best in these areas", "title": "" }, { "docid": "2cc97c407494310f500525b938e8aaa4", "text": "OBJECTIVE\nIn this paper, we aim to investigate the effect of computer-aided triage system, which is implemented for the health checkup of lung lesions involving tens of thousands of chest X-rays (CXRs) that are required for diagnosis. Therefore, high accuracy of diagnosis by an automated system can reduce the radiologist's workload on scrutinizing the medical images.\n\n\nMETHOD\nWe present a deep learning model in order to efficiently detect abnormal levels or identify normal levels during mass chest screening so as to obtain the probability confidence of the CXRs. Moreover, a convolutional sparse denoising autoencoder is designed to compute the reconstruction error. We employ four publicly available radiology datasets pertaining to CXRs, analyze their reports, and utilize their images for mining the correct disease level of the CXRs that are to be submitted to a computer aided triaging system. Based on our approach, we vote for the final decision from multi-classifiers to determine which three levels of the images (i.e. 
normal, abnormal, and uncertain cases) that the CXRs fall into.\n\n\nRESULTS\nWe only deal with the grade diagnosis for physical examination and propose multiple new metric indices. Combining predictors for classification by using the area under a receiver operating characteristic curve, we observe that the final decision is related to the threshold from reconstruction error and the probability value. Our method achieves promising results in terms of precision of 98.7 and 94.3% based on the normal and abnormal cases, respectively.\n\n\nCONCLUSION\nThe results achieved by the proposed framework show superiority in classifying the disease level with high accuracy. This can potentially save the radiologists time and effort, so as to allow them to focus on higher-level risk CXRs.", "title": "" }, { "docid": "727add0c0e44d0044d7f58b3633160d2", "text": "Case II: Deterministic transitions, continuous state Case III: “Mildly” stochastic trans., finite state: P(s,a,s’) ≥ 1 δ Case IV: Bounded-noise stochastic transitions, continuous state: st+1 = T(st, at) + wt , ||wt|| ≤ ∆ Planning and Learning in Environments with Delayed Feedback Thomas J. Walsh, Ali Nouri, Lihong Li, Michael L. Littman Rutgers Laboratory for Real Life Reinforcement Learning Computer Science Department, Rutgers University, Piscataway NJ", "title": "" }, { "docid": "78bc13c6b86ea9a8fda75b66f665c39f", "text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves the state-of-the-art results on three benchmarks: Stanford Natural Language Inference (SNLI) dataset, MultiGenre Natural Language Inference (MultiNLI) dataset and Quora Question Pairs dataset.", "title": "" }, { "docid": "a8c4b84175074e654cf1facfc65bde50", "text": "We propose monotonic classification with selection of monotonic features as a defense against evasion attacks on classifiers for malware detection. The monotonicity property of our classifier ensures that an adversary will not be able to evade the classifier by adding more features. We train and test our classifier on over one million executables collected from VirusTotal. Our secure classifier has 62% temporal detection rate at a 1% false positive rate. In comparison with a regular classifier with unrestricted features, the secure malware classifier results in a drop of approximately 13% in detection rate. Since this degradation in performance is a result of using a classifier that cannot be evaded, we interpret this performance hit as the cost of security in classifying malware.", "title": "" }, { "docid": "3cc84fda5e04ccd36f5b632d9da3a943", "text": "We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data. Using a divide-and-conquer approach to generate inter-slice connectivity, we create a case table that defines triangle topology. The algorithm processes the 3D medical data in scan-line order and calculates triangle vertices using linear interpolation. We find the gradient of the original data, normalize it, and use it as a basis for shading the models. The detail in images produced from the generated surface models is the result of maintaining the inter-slice connectivity, surface data, and gradient information present in the original 3D data. 
Results from computed tomography (CT), magnetic resonance (MR), and single-photon emission computed tomography (SPECT) illustrate the quality and functionality of marching cubes. We also discuss improvements that decrease processing time and add solid modeling capabilities.", "title": "" }, { "docid": "301715c650ee5f918ddeaf0c18889183", "text": "Keyframe-based Learning from Demonstration has been shown to be an effective method for allowing end-users to teach robots skills. We propose a method for using multiple keyframe demonstrations to learn skills as sequences of positional constraints (c-keyframes) which can be planned between for skill execution. We also introduce an interactive GUI which can be used for displaying the learned c-keyframes to the teacher, for altering aspects of the skill after it has been taught, or for specifying a skill directly without providing kinesthetic demonstrations. We compare 3 methods of teaching c-keyframe skills: kinesthetic teaching, GUI teaching, and kinesthetic teaching followed by GUI editing of the learned skill (K-GUI teaching). Based on user evaluation, the K-GUI method of teaching is found to be the most preferred, and the GUI to be the least preferred. Kinesthetic teaching is also shown to result in more robust constraints than GUI teaching, and several use cases of K-GUI teaching are discussed to show how the GUI can be used to improve the results of kinesthetic teaching.", "title": "" }, { "docid": "53afafd2fc1087989a975675ff4098d8", "text": "The sixth generation of IEEE 802.11 wireless local area networks is under developing in the Task Group 802.11ax. One main physical layer (PHY) novel feature in the IEEE 802.11ax amendment is the specification of orthogonal frequency division multiplexing (OFDM) uplink multi-user multiple-input multiple-output (UL MU-MIMO) techniques. A challenge issue to implement UL MU-MIMO in OFDM PHY is the mitigation of the relative carrier frequency offset (CFO), which can cause intercarrier interference and rotation of the constellation of received symbols, and, consequently, degrading the system performance dramatically if it is not properly mitigated. In this paper, we show that a frequency domain CFO estimation and correction scheme implemented at both transmitter (Tx) and receiver (Rx) coupled with pre-compensation approach at the Tx can decrease the negative effects of the relative CFO.", "title": "" }, { "docid": "c5ccbeec002977a2722f7b1e017112e1", "text": "Distributed processing of real-world graphs is challenging due to their size and the inherent irregular structure of graph computations. We present HipG, a distributed framework that facilitates programming parallel graph algorithms by composing the parallel application automatically from the user-defined pieces of sequential work on graph nodes. To make the user code high-level, the framework provides a unified interface to executing methods on local and non-local graph nodes and an abstraction of exclusive execution. The graph computations are managed by logical objects called synchronizers, which we used, for example, to implement distributed divide-and-conquer decomposition into strongly connected components. The code written in HipG is independent of a particular graph representation, to the point that the graph can be created on-the-fly, i.e. by the algorithm that computes on this graph, which we used to implement a distributed model checker. 
HipG programs are in general short and elegant; they achieve good portability, memory utilization, and performance.", "title": "" }, { "docid": "8d176debd26505d424dcbf8f5cfdb4d1", "text": "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.", "title": "" }, { "docid": "4af06d0e333f681a2d9afdb3298b549b", "text": "In this paper we present CRF-net, a CNN-based solution for estimating the camera response function from a single photograph. We follow the recent trend of using synthetic training data, and generate a large set of training pairs based on a small set of radio-metrically linear images and the DoRF database of camera response functions. The resulting CRF-net estimates the parameters of the EMoR camera response model directly from a single photograph. Experimentally, we show that CRF-net is able to accurately recover the camera response function from a single photograph under a wide range of conditions.", "title": "" }, { "docid": "66b7ed8c1d20bceafb0a1a4194cd91e8", "text": "In this paper a novel watermarking scheme for image authentication and recovery is presented. The algorithm can detect modified regions in images and is able to recover a good approximation of the original content of the tampered regions. For this purpose, two different watermarks have been used: a semi-fragile watermark for image authentication and a robust watermark for image recovery, both embedded in the Discrete Wavelet Transform domain. The proposed method achieves good image quality with mean Peak Signal-to-Noise Ratio values of the watermarked images of 42 dB and identifies image tampering of up to 20% of the original image.", "title": "" }, { "docid": "697ae7ff6a0ace541ea0832347ba044f", "text": "The repair of wounds is one of the most complex biological processes that occur during human life. After an injury, multiple biological pathways immediately become activated and are synchronized to respond. In human adults, the wound repair process commonly leads to a non-functioning mass of fibrotic tissue known as a scar. By contrast, early in gestation, injured fetal tissues can be completely recreated, without fibrosis, in a process resembling regeneration. Some organisms, however, retain the ability to regenerate tissue throughout adult life. Knowledge gained from studying such organisms might help to unlock latent regenerative pathways in humans, which would change medical practice as much as the introduction of antibiotics did in the twentieth century.", "title": "" } ]
scidocsrr
5a8898a69b38590d857af0faca5e6947
SEPIC converter based Photovoltaic system with Particle swarm Optimization MPPT
[ { "docid": "470093535d4128efa9839905ab2904a5", "text": "Photovolatic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the largeand small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.", "title": "" }, { "docid": "180dd2107c6a39e466b3d343fa70174f", "text": "This paper presents simulation and hardware implementation of incremental conductance (IncCond) maximum power point tracking (MPPT) used in solar array power systems with direct control method. The main difference of the proposed system to existing MPPT systems includes elimination of the proportional-integral control loop and investigation of the effect of simplifying the control circuit. Contributions are made in several aspects of the whole system, including converter design, system simulation, controller programming, and experimental setup. The resultant system is capable of tracking MPPs accurately and rapidly without steady-state oscillation, and also, its dynamic performance is satisfactory. The IncCond algorithm is used to track MPPs because it performs precise control under rapidly changing atmospheric conditions. MATLAB and Simulink were employed for simulation studies, and Code Composer Studio v3.1 was used to program a TMS320F2812 digital signal processor. The proposed system was developed and tested successfully on a photovoltaic solar panel in the laboratory. Experimental results indicate the feasibility and improved functionality of the system.", "title": "" }, { "docid": "238c3e34ad2fcb4a4ef9d98aea468bd8", "text": "Performance of Photovoltaic (PV) system is greatly dependent on the solar irradiation and operating temperature. Due to partial shading condition, the characteristics of a PV system considerably change and often exhibit several local maxima with one global maxima. Conventional Maximum Power Point Tracking (MPPT) techniques can easily be trapped at local maxima under partial shading. This significantly reduced the energy yield of the PV systems. In order to solve this problem, this paper proposes a Maximum Power Point tracking algorithm based on particle swarm optimization (PSO) that is capable of tracking global MPP under partial shaded conditions. The performance of proposed algorithm is evaluated by means of simulation in MATLAB Simulink. The proposed algorithm is applied to a grid connected PV system, in which a Boost (step up) DC-DC converter satisfactorily tracks the global peak.", "title": "" } ]
[ { "docid": "72e5b92632824d3633539727125763bc", "text": "NB-IoT system focues on indoor coverage, low cost, long battery life, and enabling a large number of connected devices. The NB-IoT system in the inband mode should share the antenna with the LTE system and support mult-PRB to cover many terminals. Also, the number of used antennas should be minimized for price competitiveness. In this paper, the structure and implementation of the NB-IoT base station system will be describe.", "title": "" }, { "docid": "8b79816cc07237489dafde316514702a", "text": "In this dataset paper we describe our work on the collection and analysis of public WhatsApp group data. Our primary goal is to explore the feasibility of collecting and using WhatsApp data for social science research. We therefore present a generalisable data collection methodology, and a publicly available dataset for use by other researchers. To provide context, we perform statistical exploration to allow researchers to understand what public WhatsApp group data can be collected and how this data can be used. Given the widespread use of WhatsApp, our techniques to obtain public data and potential applications are important for the community.", "title": "" }, { "docid": "0fbd2e65c5d818736486ffb1ec5e2a6d", "text": "We establish linear profile decompositions for the fourth order linear Schrödinger equation and for certain fourth order perturbations of the linear Schrödinger equation, in dimensions greater than or equal to two. We apply these results to prove dichotomy results on the existence of extremizers for the associated Stein–Tomas/Strichartz inequalities; along the way, we also obtain lower bounds for the norms of these operators.", "title": "" }, { "docid": "8bbbaab2cf7825ca98937de14908e655", "text": "Software Reliability Model is categorized into two, one is static model and the other one is dynamic model. Dynamic models observe the temporary behavior of debugging process during testing phase. In Static Models, modeling and analysis of program logic is done on the same code. A Model which describes about error detection in software Reliability is called Software Reliability Growth Model. This paper reviews various existing software reliability models and there failure intensity function and the mean value function. On the basis of this review a model is proposed for the software reliability having different mean value function and failure intensity function.", "title": "" }, { "docid": "d4878e0d2aaf33bb5d9fc9c64605c4d2", "text": "Labeled Faces in the Wild (LFW) database has been widely utilized as the benchmark of unconstrained face verification and due to big data driven machine learning methods, the performance on the database approaches nearly 100%. However, we argue that this accuracy may be too optimistic because of some limiting factors. Besides different poses, illuminations, occlusions and expressions, crossage face is another challenge in face recognition. Different ages of the same person result in large intra-class variations and aging process is unavoidable in real world face verification. However, LFW does not pay much attention on it. Thereby we construct a Cross-Age LFW (CALFW) which deliberately searches and selects 3,000 positive face pairs with age gaps to add aging process intra-class variance. Negative pairs with same gender and race are also selected to reduce the influence of attribute difference between positive/negative pairs and achieve face verification instead of attributes classification. 
We evaluate several metric learning and deep learning methods on the new database. Compared to the accuracy on LFW, the accuracy drops about 10%-17% on CALFW.", "title": "" }, { "docid": "d6d3d2762bc45cc71be488b8e11712a8", "text": "NAND flash memory is being widely adopted as a storage medium for embedded devices. FTL (Flash Translation Layer) is one of the most essential software components in NAND flash-based embedded devices as it allows to use legacy files systems by emulating the traditional block device interface on top of NAND flash memory.\n In this paper, we propose a novel FTL, called μ-FTL. The main design goal of μ-FTL is to reduce the memory foot-print as small as possible, while providing the best performance by supporting multiple mapping granularities based on variable-sized extents. The mapping information is managed by μ-Tree, which offers an efficient index structure for NAND flash memory. Our evaluation results show that μ-FTL significantly outperforms other block-mapped FTLs with the same memory size by up to 89.7%.", "title": "" }, { "docid": "97b4de3dc73e0a6d7e17f94dff75d7ac", "text": "Evolution in cloud services and infrastructure has been constantly reshaping the way we conduct business and provide services in our day to day lives. Tools and technologies created to improve such cloud services can also be used to impair them. By using generic tools like nmap, hping and wget, one can estimate the placement of virtual machines in a cloud infrastructure with a high likelihood. Moreover, such knowledge and tools can also be used by adversaries to further launch various kinds of attacks. In this paper we focus on one such specific kind of attack, namely a denial of service (DoS), where an attacker congests a bottleneck network channel shared among virtual machines (VMs) coresident on the same physical node in the cloud infrastructure. We evaluate the behavior of this shared network channel using Click modular router on DETER testbed. We illustrate that game theoretic concepts can be used to model this attack as a two-player game and recommend strategies for defending against such attacks.", "title": "" }, { "docid": "bdb051eb50c3b23b809e06bed81710fc", "text": "PURPOSE\nTo test the hypothesis that physicians' empathy is associated with positive clinical outcomes for diabetic patients.\n\n\nMETHOD\nA correlational study design was used in a university-affiliated outpatient setting. Participants were 891 diabetic patients, treated between July 2006 and June 2009, by 29 family physicians. Results of the most recent hemoglobin A1c and LDL-C tests were extracted from the patients' electronic records. The results of hemoglobin A1c tests were categorized into good control (<7.0%) and poor control (>9.0%). Similarly, the results of the LDL-C tests were grouped into good control (<100) and poor control (>130). The physicians, who completed the Jefferson Scale of Empathy in 2009, were grouped into high, moderate, and low empathy scorers. Associations between physicians' level of empathy scores and patient outcomes were examined.\n\n\nRESULTS\nPatients of physicians with high empathy scores were significantly more likely to have good control of hemoglobin A1c (56%) than were patients of physicians with low empathy scores (40%, P < .001). Similarly, the proportion of patients with good LDL-C control was significantly higher for physicians with high empathy scores (59%) than physicians with low scores (44%, P < .001). 
Logistic regression analyses indicated that physicians' empathy had a unique contribution to the prediction of optimal clinical outcomes after controlling for physicians' and patients' gender and age, and patients' health insurance.\n\n\nCONCLUSIONS\nThe hypothesis of a positive relationship between physicians' empathy and patients' clinical outcomes was confirmed, suggesting that physicians' empathy is an important factor associated with clinical competence and patient outcomes.", "title": "" }, { "docid": "e4dd72a52d4961f8d4d8ee9b5b40d821", "text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.", "title": "" }, { "docid": "b02f5af836c0d18933de091044ccb916", "text": "This research presents a mobile augmented reality (MAR) travel guide, named CorfuAR, which supports personalized recommendations. We report the development process and devise a theoretical model that explores the adoption of MAR applications through their emotional impact. A field study on Corfu visitors (n=105) shows that the functional properties of CorfuAR evoke feelings of pleasure and arousal, which, in turn, influence the behavioral intention of using it. This is the first study that empirically validates the relation between functional system properties, user emotions, and adoption behavior. The paper discusses also the theoretical and managerial implications of our study.", "title": "" }, { "docid": "f7f1deeda9730056876db39b4fe51649", "text": "Fracture in bone occurs when an external force exercised upon the bone is more than what the bone can tolerate or bear. As, its consequence structure and muscular power of the bone is disturbed and bone becomes frail, which causes tormenting pain on the bone and ends up in the loss of functioning of bone. Accurate bone structure and fracture detection is achieved using various algorithms which removes noise, enhances image details and highlights the fracture region. Automatic detection of fractures from x-ray images is considered as an important process in medical image analysis by both orthopaedic and radiologic aspect. Manual examination of x-rays has multitude drawbacks. The process is time consuming and subjective. In this paper we discuss several digital image processing techniques applied in fracture detection of bone. This led us to study techniques that have been applied to images obtained from different modalities like x-ray, CT, MRI and ultrasound. 
Keywords— Fracture detection, Medical Imaging, Morphology, Tibia, X-ray image", "title": "" }, { "docid": "5c76caebe05acd7d09e6cace0cac9fe1", "text": "A program that detects people in images has a multitude of potential applications, including tracking for biomedical applications or surveillance, activity recognition for person-device interfaces (device control, video games), organizing personal picture collections, and much more. However, detecting people is difficult, as the appearance of a person can vary enormously because of changes in viewpoint or lighting, clothing style, body pose, individual traits, occlusion, and more. It then makes sense that the first people detectors were really detectors of pedestrians, that is, people walking at a measured pace on a sidewalk, and viewed from a fixed camera. Pedestrians are nearly always upright, their arms are mostly held along the body, and proper camera placement relative to pedestrian traffic can virtually ensure a view from the front or from behind (Figure 1). These factors reduce variation of appearance, although clothing, illumination, background, occlusions, and somewhat limited variations of pose still present very significant challenges.", "title": "" }, { "docid": "e45204012e5a12504cbb4831c9b5d629", "text": "The focus of this paper is the application of the theory of contingent tutoring to the design of a computer-based system designed to support learning in aspects of algebra. Analyses of interactions between a computer-based tutoring system and 42 14- and 15-year-old pupils are used to explore and explain the relations between individual differences in learner-tutor interaction, learners' prior knowledge and learning outcomes. Parallels between the results of these analyses and empirical investigations of help seeking in adult-child tutoring are drawn out. The theoretical significance of help seeking as a basis for studying the impact of individual learner differences in the collaborative construction of 'zones of proximal development' is assessed. In addition to demonstrating the significance of detailed analyses of learner-system interaction as a basis for inferences about learning processes, the investigation also attempts to show the value of exploiting measures of on-line help seeking as a means of assessing learning transfer. Finally, the implications of the findings for contingency theory are discussed, and the theoretical and practical benefits of integrating psychometric assessment, interaction process analyses, and knowledge-based learner modelling in the design and evaluation of computer-based tutoring are explored. © 2000 Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "11c4f0610d701c08516899ebf14f14c4", "text": "Histone post-translational modifications impact many aspects of chromatin and nuclear function. Histone H4 Lys 20 methylation (H4K20me) has been implicated in regulating diverse processes ranging from the DNA damage response, mitotic condensation, and DNA replication to gene regulation. PR-Set7/Set8/KMT5a is the sole enzyme that catalyzes monomethylation of H4K20 (H4K20me1). It is required for maintenance of all levels of H4K20me, and, importantly, loss of PR-Set7 is catastrophic for the earliest stages of mouse embryonic development. These findings have placed PR-Set7, H4K20me, and proteins that recognize this modification as central nodes of many important pathways.
In this review, we discuss the mechanisms required for regulation of PR-Set7 and H4K20me1 levels and attempt to unravel the many functions attributed to these proteins.", "title": "" }, { "docid": "8a478da1c2091525762db35f1ac7af58", "text": "In this paper, we present the design and performance of a portable, arbitrary waveform, multichannel constant current electrotactile stimulator that costs less than $30 in components. The stimulator consists of a stimulation controller and power supply that are less than half the size of a credit card and can produce ±15 mA at ±150 V. The design is easily extensible to multiple independent channels that can receive an arbitrary waveform input from a digital-to-analog converter, drawing only 0.9 W/channel (lasting 4–5 hours upon continuous stimulation using a 9 V battery). Finally, we compare the performance of our stimulator to similar stimulators both commercially available and developed in research.", "title": "" }, { "docid": "3b1a7539000a8ddabdaa4888b8bb1adc", "text": "This paper presents evaluations among the most usual maximum power point tracking (MPPT) techniques, doing meaningful comparisons with respect to the amount of energy extracted from the photovoltaic (PV) panel [tracking factor (TF)] in relation to the available power, PV voltage ripple, dynamic response, and use of sensors. Using MatLab/Simulink and dSPACE platforms, a digitally controlled boost dc-dc converter was implemented and connected to an Agilent Solar Array E4350B simulator in order to verify the analytical procedures. The main experimental results are presented for conventional MPPT algorithms and improved MPPT algorithms named IC based on proportional-integral (PI) and perturb and observe based on PI. Moreover, the dynamic response and the TF are also evaluated using a user-friendly interface, which is capable of online program power profiles and computes the TF. Finally, a typical daily insulation is used in order to verify the experimental results for the main PV MPPT methods.", "title": "" }, { "docid": "32670b62c6f6e7fa698e00f7cf359996", "text": "Four cases of self-poisoning with 'Roundup' herbicide are described, one of them fatal. One of the survivors had a protracted hospital stay and considerable clinical and laboratory detail is presented. Serious self-poisoning is associated with massive gastrointestinal fluid loss and renal failure. The management of such cases and the role of surfactant toxicity are discussed.", "title": "" }, { "docid": "b3fdd9e446c427022eee637f62ffefa4", "text": "Software maintenance constitutes a major phase of the software life cycle. Studies indicate that software maintenance is responsible for a significant percentage of a system’s overall cost and effort. The software engineering community has identified four major types of software maintenance, namely, corrective, perfective, adaptive, and preventive maintenance. Software maintenance can be seen from two major points of view. First, the classic view where software maintenance provides the necessary theories, techniques, methodologies, and tools for keeping software systems operational once they have been deployed to their operational environment. Most legacy systems subscribe to this view of software maintenance. The second view is a more modern emerging view, where maintenance is an integral part of the software development process and it should be applied from the early stages in the software life cycle. 
Regardless of the view by which we consider software maintenance, the fact is that it is the driving force behind software evolution, a very important aspect of a software system. This entry provides an in-depth discussion of software maintenance techniques, methodologies, tools, and emerging trends. Q1", "title": "" }, { "docid": "8ddb7c62f032fb07116e7847e69b51d1", "text": "Software requirements are the foundations from which quality is measured. Measurement enables to improve the software process; assist in planning, tracking and controlling the software project and assess the quality of the software thus produced. Quality issues such as accuracy, security and performance are often crucial to the success of a software system. Quality should be maintained from starting phase of software development. Requirements management, play an important role in maintaining quality of software. A project can deliver the right solution on time and within budget with proper requirements management. Software quality can be maintained by checking quality attributes in requirements document. Requirements metrics such as volatility, traceability, size and completeness are used to measure requirements engineering phase of software development lifecycle. Manual measurement is expensive, time consuming and prone to error therefore automated tools should be used. Automated requirements tools are helpful in measuring requirements metrics. The aim of this paper is to study, analyze requirements metrics and automated requirements tools, which will help in choosing right metrics to measure software development based on the evaluation of Automated Requirements Tools", "title": "" } ]
scidocsrr
5a992b99709ed3247ffcfab7aae6fe1f
LogView: Visualizing Event Log Clusters
[ { "docid": "4dc9360837b5793a7c322f5b549fdeb1", "text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering", "title": "" } ]
[ { "docid": "114d6c97f19bc29152ecda8fa2447f63", "text": "The game of Bridge provides a number of research areas to AI researchers due to the many components that constitute the game. Bidding provides the subtle challenge of potential outcome maximization while learning through information gathering, but constrained to a limited rule set. Declarer play can be accomplished through planning and inference. Both the bidding and the play can also be accomplished through Monte Carlo analysis using a perfect information solver. Double-dummy play is a perfect information search, but over an enormous state-space, and thus requires α-β pruning, transposition tables and other tree-minimization techniques. As such, researchers have made much progress in each of these sub-fields over the years, particularly double-dummy play, but are yet to produce a consistent expert level player.", "title": "" }, { "docid": "bc74c28794d9d6ae36ee6cfdc5fd04ac", "text": "This paper describes development of joint materials using only base metals (Cu and Sn) for power semiconductor assembly. The optimum composition at this moment is Cu8wt%Sn92wt% (8Cu92Sn hereafter) particles: pure Cu (100Cu hereafter) particles = 20:80 (wt% ratio), which indicates good stability under Thermal Cycling Test (TCT, −55°C∼+200°C, 20cycles). The composition indicated to be effective to eliminate voids and chip cracks. As an initial choice of joint material using TLPS (Transient Liquid Phase Sintering), we considered SAC305 might have good role as TLPS trigger. But, actual TCT results indicated that existence of Ag must have negative effect to eliminate voids from the joint region. Tentative behavior model using 8Cu92Sn and 100Cu joint material is proposed. Optimized composition indicated shear force 40MPa at 300°C. Re-melting point of the composition is 409°C after TLPS when there is additional Cu supply from substrate and terminal of mounted die.", "title": "" }, { "docid": "de99a984795645bc2e9fb4b3e3173807", "text": "Neural networks are a family of powerful machine learning models. is book focuses on the application of neural network models to natural language data. e first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which allows to easily define and train arbitrary neural networks, and is the basis behind the design of contemporary neural network software libraries. e second part of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. ese architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we also discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.", "title": "" }, { "docid": "0206cbec556e66fd19aa42c610cdccfa", "text": "The adoption of the General Data Protection Regulation (GDPR) is a major concern for data controllers of the public and private sector, as they are obliged to conform to the new principles and requirements managing personal data. In this paper, we propose that the data controllers adopt the concept of the Privacy Level Agreement. 
We present a metamodel for PLAs to support privacy management, based on analysis of privacy threats, vulnerabilities and trust relationships in their Information Systems, whilst complying with laws and regulations, and we illustrate the relevance of the metamodel with the GDPR.", "title": "" }, { "docid": "bebd034597144d4656f6383d9bd22038", "text": "The Turing test aimed to recognize the behavior of a human from that of a computer algorithm. Such challenge is more relevant than ever in today’s social media context, where limited attention and technology constrain the expressive power of humans, while incentives abound to develop software agents mimicking humans. These social bots interact, often unnoticed, with real people in social media ecosystems, but their abundance is uncertain. While many bots are benign, one can design harmful bots with the goals of persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then review current efforts to detect social bots on Twitter. Features related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.", "title": "" }, { "docid": "923745305f28130dc1e709360de4b97c", "text": "Segmenting brain MR scans could be highly beneficial for diagnosing, treating and evaluating the progress of specific diseases. Up to this point, manual segmentation, performed by experts, is the conventional method in hospitals and clinical environments. Although manual segmentation is accurate, it is time consuming, expensive and might not be reliable. Many non-automatic and semi automatic methods have been proposed in the literature in order to segment MR brain images, but the level of accuracy is not sufficiently comparable with the one of manual. The aim of this project is to implement and make a preliminary evaluation of a method based on machine learning technique for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) of brain MR scans using images available within the open MICCAI grand challenge (MRBrainS13). The proposed method employs supervised artificial neural network based auto-context algorithm, exploiting intensity-based, spatial-based and shape model-based level set segmentation results as features of the network. The obtained average results based on Dice similarity index were 96.98%, 95.35%, 80.95%, 88.36% and 84.71% for intracranial volume, brain (WM + GM), CSF, WM and GM respectively. This method achieved competitive results with considerably shorter required training time on MRBrainsS13 challenge.", "title": "" }, { "docid": "865cfae2da5ad3d1d10d21b1defdc448", "text": "During the last decade, novel immunotherapeutic strategies, in particular antibodies directed against immune checkpoint inhibitors, have revolutionized the treatment of different malignancies leading to an improved survival of patients. Identification of immune-related biomarkers for diagnosis, prognosis, monitoring of immune responses and selection of patients for specific cancer immunotherapies is urgently required and therefore areas of intensive research. 
Easily accessible samples, in particular liquid biopsies (body fluids) such as blood, saliva or urine, are preferred for serial tumor biopsies. Although monitoring of immune and tumor responses prior to, during and post immunotherapy has led to significant advances in patients' outcomes, valid and stable prognostic biomarkers are still missing. This might be due to the limited capacity of the technologies employed, reproducibility of results as well as assay stability and validation of results. Therefore, solid approaches to assess immune regulation and modulation as well as to follow up the nature of the tumor in liquid biopsies are urgently required to discover valuable and relevant biomarkers, including sample preparation, timing of the collection and the type of liquid samples. This article summarizes our knowledge of the well-known liquid material in a new context as liquid biopsy and focuses on collection and assay requirements for the analysis and the technical developments that allow the implementation of different high-throughput assays to detect alterations at the genetic and immunologic level, which could be used for monitoring treatment efficiency, acquired therapy resistance mechanisms and the prognostic value of the liquid biopsies.", "title": "" }, { "docid": "41a3a4174a0fade6fb96ade0294c3eda", "text": "Recent developments in fully convolutional neural networks enable efficient end-to-end learning of semantic segmentation. Traditionally, the convolutional classifiers are taught to learn the representative semantic features of labeled semantic objects. In this work, we propose a reverse attention network (RAN) architecture that trains the network to capture the opposite concept (i.e., what is not associated with a target class) as well. The RAN is a three-branch network that performs the direct, reverse and reverse-attention learning processes simultaneously. Extensive experiments are conducted to show the effectiveness of the RAN in semantic segmentation. Being built upon the DeepLabv2-LargeFOV, the RAN achieves the state-of-the-art mean IoU score (48.1%) for the challenging PASCAL-Context dataset. Significant performance improvements are also observed for the PASCAL-VOC, Person-Part, NYUDv2 and ADE20K datasets.", "title": "" }, { "docid": "d5d621b131fa1f09e161a0f59c0e1313", "text": "This paper describes the modeling of a distance relay using the Matlab/Simulink package. The SimPowerSystems toolbox was used for detailed modeling of the distance relay, transmission line and fault simulation. Inside the modeling, a single line to ground (SLG) fault was chosen as the fault type and a Mho type distance characteristic was chosen as the protection scheme. A graphical user interface (GUI) was created using the GUI package inside Matlab for the developed model. With the interactive environment of the graphical user interface, the difficulties in teaching distance relays to undergraduate students can be eliminated. © 2013 The Authors. Published by Elsevier Ltd. Selection and/or peer-review under responsibility of the Research Management & Innovation Centre, Universiti Malaysia Perlis.", "title": "" }, { "docid": "15906c9bd84e55aec215843ef9e542a0", "text": "Recent growing interest in predicting and influencing consumer behavior has generated a parallel increase in research efforts on Recommender Systems. Many of the state-of-the-art Recommender Systems algorithms rely on obtaining user ratings in order to later predict unknown ratings.
An underlying assumption in this approach is that the user ratings can be treated as ground truth of the user's taste. However, users are inconsistent in giving their feedback, thus introducing an unknown amount of noise that challenges the validity of this assumption. In this paper, we tackle the problem of analyzing and characterizing the noise in user feedback through ratings of movies. We present a user study aimed at quantifying the noise in user ratings that is due to inconsistencies. We measure RMSE values that range from 0.557 to 0.8156. We also analyze how factors such as item sorting and time of rating affect this noise.", "title": "" }, { "docid": "1dc41e5c43fc048bc1f1451eaa1ff764", "text": "ABSTRACT Individual railroad track maintenance standards and the Federal Railroad Administration (FRA) Track Safety Standards require periodic inspection of railway infrastructure to ensure safe and efficient operation. This inspection is a critical, but labor-intensive task that results in large annual operating expenditures and has limitations in speed, quality, objectivity, and scope. To improve the cost-effectiveness of the current inspection process, machine vision technology can be developed and used as a robust supplement to manual inspections. This paper focuses on the development and performance of machine vision algorithms designed to recognize turnout components, as well as the performance of algorithms designed to recognize and detect defects in other track components. In order to prioritize which components are the most critical for the safe operation of trains, a risk-based analysis of the FRA Accident Database was performed. Additionally, an overview of current technologies for track and turnout component condition assessment is presented. The machine vision system consists of a video acquisition system for recording digital images of track and customized algorithms to identify defects and symptomatic conditions within the images. A prototype machine vision system has been developed for automated inspection of rail anchors and cut spikes, as well as tie recognition. Experimental test results from the system have shown good reliability for recognizing ties, anchors, and cut spikes. This machine vision system, in conjunction with defect analysis and trending of historical data, will enhance the ability for longer-term predictive assessment of the health of the track system and its components. INTRODUCTION Railroads conduct regular inspections of their track in order to maintain safe and efficient operation. In addition to internal railroad inspection procedures, periodic track inspections are required under the Federal Railroad Administration (FRA) Track Safety Standards. The objective of this research is to investigate the feasibility of developing a machine vision system to make track inspection more efficient, effective, and objective.
In addition, interim approaches to automated track inspection are possible, which will potentially lead to greater inspection effectiveness and efficiency prior to full machine vision system development and implementation. Interim solutions include video capture using vehicle-mounted cameras, image enhancement using image-processing software, and assisted automation using machine vision algorithms (1). The primary focus of this research is inspection of North American Class I railroad mainline and siding tracks, as these generally experience the highest traffic densities. High traffic densities necessitate frequent inspection and more stringent maintenance requirements, and leave railroads less time to accomplish it. This makes them the most likely locations for cost-effective investment in new, more efficient, but potentially more capital-intensive inspection technology. The algorithms currently under development will also be adaptable to many types of infrastructure and usage, including transit and some components of high-speed rail (HSR) infrastructure. The machine vision system described in this paper was developed through an interdisciplinary research collaboration at the University of Illinois at Urbana-Champaign (UIUC) between the Computer Vision and Robotics Laboratory (CVRL) at the Beckman Institute for Advanced Science and Technology and the Railroad Engineering Program in the Department of Civil and Environmental Engineering. CURRENT TRACK INSPECTION TECHNOLOGIES USING MACHINE VISION The international railroad community has undertaken significant research to develop innovative applications for advanced technologies with the objective of improving the process of visual track inspection. The development of machine vision, one such inspection technology which uses video cameras, optical sensors, and custom designed algorithms, began in the early 1990s with work analyzing rail surface defects (2). Machine vision systems are currently in use or under development for a variety of railroad inspection tasks, both wayside and mobile, including inspection of joint bars, surface defects in the rail, rail profile, ballast profile, track gauge, intermodal loading efficiency, railcar structural components, and railcar safety appliances (1, 3-21, 23). The University of Illinois at Urbana-Champaign (UIUC) has been involved in multiple railroad machine-vision research projects sponsored by the Association of American Railroads (AAR), BNSF Railway, NEXTRANS Region V Transportation Center, and the Transportation Research Board (TRB) High-Speed Rail IDEA Program (6-11). In this section, we provide a brief overview of machine vision condition monitoring applications currently in use or under development for inspection of railway infrastructure. Railway applications of machine vision technology have three main elements: the image acquisition system, the image analysis system, and the data analysis system (1). The attributes and performance of each of these individual components determine the overall performance of a machine vision system. Therefore, the following review includes a discussion of the overall machine vision system, as well as approaches to image acquisition, algorithm development techniques, lighting methodologies, and experimental results. Rail Surface Defects The Institute of Digital Image Processing (IDIP) in Austria has developed a machine vision system for rail surface inspection during the rail manufacturing process (12).
Currently, rail inspection is carried out by humans and complemented with eddy current systems. The objective of this machine vision system is to replace visual inspections on rail production lines. The machine vision system uses a spectral image differencing procedure (SIDP) to generate three-dimensional (3D) images and detect surface defects in the rails. Additionally, the cameras can capture images at speeds up to 37 miles per hour (mph) (60 kilometers per hour (kph)). Although the system is currently being used only in rail production lines, it can also be attached to an inspection vehicle for field inspection of rail. Additionally, the Institute of Intelligent Systems for Automation (ISSIA) in Italy has been researching and developing a system for detecting rail corrugation (13). The system uses images of 512x2048 pixels in resolution, artificial light, and classification of texture to identify surface defects. The system is capable of acquiring images at speeds of up to 125 mph (200 kph). Three image-processing methods have been proposed and evaluated by ISSIA: Gabor, wavelet, and Gabor wavelet. Gabor was selected as the preferred processing technique. Currently, the technology has been implemented through the patented system known as Visual Inspection System for Railways (VISyR). Rail Wear The Moscow Metro and the State of Common Means of Moscow developed a photonic system to measure railhead wear (14). The system consists of 4 CCD cameras and 4 laser lights mounted on an inspection vehicle. The cameras are connected to a central computer that receives images every 20 nanoseconds (ns). The system extracts the profile of the rail using two methods (cut-off and tangent) and the results are ultimately compared with pre-established rail wear templates. Tie Condition The Georgetown Rail Equipment Company (GREX) has developed and commercialized a crosstie inspection system called AURORA (15). The objective of the system is to inspect and classify the condition of timber and concrete crossties. Additionally, the system can be adapted to measure rail seat abrasion (RSA) and detect defects in fastening systems. AURORA uses high-definition cameras and high-voltage lasers as part of the lighting arrangement and is capable of inspecting 70,000 ties per hour at a speed of 30-45 mph (48-72 kph). The system has been shown to replicate results obtained by track inspectors with an accuracy of 88%. Since 2008, Napier University in Sweden has been researching the use of machine vision technology for inspection of timber crossties (16). Their system evaluates the condition of the ends of the ties and classifies them into one of two categories: good or bad. This classification is performed by evaluating quantitative parameters such as the number, length, and depth of cracks, as well as the condition of the tie plate. Experimental results showed that the system has an accuracy of 90% with respect to the correct classification of ties. Future research work includes evaluation of the center portion of the ties and integration with other non-destructive testing (NDT) applications. In 2003, the University of Zaragoza in Spain began research on the development of machine vision techniques to inspect concrete crossties using a stereo-metric system to measure different surface shapes (17). The system is used to estimate the deviation from the required dimensional tolerances of the concrete ties in production lines.
Two CCD cameras with a resolution of 768x512 pixels are used for image capture and lasers are used for artificial lighting. The system has been shown to produce reliable results, but quantifiable results were not found in the available literature. Ballast The ISS", "title": "" }, { "docid": "953851cb9cf9e755ec156fab79e3a818", "text": "We study minimization of the difference of l1 and l2 norms as a non-convex and Lipschitz continuous metric for solving constrained and unconstrained compressed sensing problems. We establish exact (stable) sparse recovery results under a restricted isometry property (RIP) condition for the constrained problem, and a full-rank theorem of the sensing matrix restricted to the support of the sparse solution. We present an iterative method for l1−2 minimization based on the difference of convex functions algorithm (DCA), and prove that it converges to a stationary point satisfying first order optimality condition. We propose a sparsity oriented simulated annealing (SA) procedure with non-Gaussian random perturbation and prove the almost sure convergence of the combined algorithm (DCASA) to a global minimum. Computation examples on success rates of sparse solution recovery show that if the sensing matrix is ill-conditioned (non RIP satisfying), then our method is better than existing non-convex compressed sensing solvers in the literature. Likewise in the magnetic resonance imaging (MRI) phantom image recovery problem, l1−2 succeeds with 8 projections. Irrespective of the conditioning of the sensing matrix, l1−2 is better than l1 in both the sparse signal and the MRI phantom image recovery problems.", "title": "" }, { "docid": "3e4fd502a999dcafb030a6898bd11f9b", "text": "We present several Hermite-type interpolation methods for rational cubics. In case the input data come from a circular arc, the rational cubic will reproduce it. keywords: Hermite interpolation, rational cubics, circular precision.", "title": "" }, { "docid": "f15508a8cd342cb6ea0ec2d0328503d7", "text": "An order book consists of a list of all buy and sell offers, represented by price and quantity, available to a market agent. The order book changes rapidly, within fractions of a second, due to new orders being entered into the book. The volume at a certain price level may increase due to limit orders, i.e. orders to buy or sell placed at the end of the queue, or decrease because of market orders or cancellations. In this paper a high-dimensional Markov chain is used to represent the state and evolution of the entire order book. The design and evaluation of optimal algorithmic strategies for buying and selling is studied within the theory of Markov decision processes. General conditions are provided that guarantee the existence of optimal strategies. Moreover, a value-iteration algorithm is presented that enables finding optimal strategies numerically. As an illustration a simple version of the Markov chain model is calibrated to high-frequency observations of the order book in a foreign exchange market. In this model, using an optimally designed strategy for buying one unit provides a significant improvement, in terms of the expected buy price, over a naive buy-one-unit strategy.", "title": "" }, { "docid": "9d90b8e88790e43d95d99bcfb8b3240a", "text": "With advances in knowledge disease, boundaries may change. Occasionally, these changes are of such a magnitude that they require redefinition of the disease. 
In recognition of the profound changes in our understanding of Parkinson's disease (PD), the International Parkinson and Movement Disorders Society (MDS) commissioned a task force to consider a redefinition of PD. This review is a discussion article, intended as the introductory statement of the task force. Several critical issues were identified that challenge current PD definitions. First, new findings challenge the central role of the classical pathologic criteria as the arbiter of diagnosis, notably genetic cases without synuclein deposition, the high prevalence of incidental Lewy body (LB) deposition, and the nonmotor prodrome of PD. It remains unclear, however, whether these challenges merit a change in the pathologic gold standard, especially considering the limitations of alternate gold standards. Second, the increasing recognition of dementia in PD challenges the distinction between diffuse LB disease and PD. Consideration might be given to removing dementia as an exclusion criterion for PD diagnosis. Third, there is increasing recognition of disease heterogeneity, suggesting that PD subtypes should be formally identified; however, current subtype classifications may not be sufficiently robust to warrant formal delineation. Fourth, the recognition of a nonmotor prodrome of PD requires that new diagnostic criteria for early-stage and prodromal PD should be created; here, essential features of these criteria are proposed. Finally, there is a need to create new MDS diagnostic criteria that take these changes in disease definition into consideration.", "title": "" }, { "docid": "a2f65eb4a81bc44ea810d834ab33d891", "text": "This survey provides the basis for developing research in the area of mobile manipulator performance measurement, an area that has relatively few research articles when compared to other mobile manipulator research areas. The survey provides a literature review of mobile manipulator research with examples of experimental applications. The survey also provides an extensive list of planning and control references as this has been the major research focus for mobile manipulators which factors into performance measurement of the system. The survey then reviews performance metrics considered for mobile robots, robot arms, and mobile manipulators and the systems that measure their performance, including machine tool measurement systems through dynamic motion tracking systems. Lastly, the survey includes a section on research that has occurred for performance measurement of robots, mobile robots, and mobile manipulators beginning with calibration, standards, and mobile manipulator artifacts that are being considered for evaluation of mobile manipulator performance.", "title": "" }, { "docid": "10b6750b3f7a589463122b55b5776a7a", "text": "This article reviews research and interventions that have grown up around a model of psychological well-being generated more than two decades ago to address neglected aspects of positive functioning such as purposeful engagement in life, realization of personal talents and capacities, and enlightened self-knowledge. 
The conceptual origins of this formulation are revisited and scientific products emerging from 6 thematic areas are examined: (1) how well-being changes across adult development and later life; (2) what are the personality correlates of well-being; (3) how well-being is linked with experiences in family life; (4) how well-being relates to work and other community activities; (5) what are the connections between well-being and health, including biological risk factors, and (6) via clinical and intervention studies, how psychological well-being can be promoted for ever-greater segments of society. Together, these topics illustrate flourishing interest across diverse scientific disciplines in understanding adults as striving, meaning-making, proactive organisms who are actively negotiating the challenges of life. A take-home message is that increasing evidence supports the health protective features of psychological well-being in reducing risk for disease and promoting length of life. A recurrent and increasingly important theme is resilience - the capacity to maintain or regain well-being in the face of adversity. Implications for future research and practice are considered.", "title": "" }, { "docid": "a32c635c1f4f4118da20cee6ffb5c1ea", "text": "We analyzed the influence of education and of culture on the neuropsychological profile of an indigenous and a nonindigenous population. The sample included 27 individuals divided into four groups: (a) seven illiterate Maya indigenous participants, (b) six illiterate Pame indigenous participants, (c) seven nonindigenous participants with no education, and (d) seven Maya indigenous participants with 1 to 4 years of education . A brief neuropsychological test battery developed and standardized in Mexico was individually administered. Results demonstrated differential effects for both variables. Both groups of indigenous participants (Maya and Pame) obtained higher scores in visuospatial tasks, and the level of education had significant effects on working and verbal memory. Our data suggested that culture dictates what it is important for survival and that education could be considered as a type of subculture that facilitates the development of certain skills.", "title": "" }, { "docid": "ef3cb4e591f52498584495caacc74069", "text": "The Hill-Sachs lesion is an osseous defect of the humeral head that is typically associated with anterior shoulder instability. The incidence of these lesions in the setting of glenohumeral instability is relatively high and approaches 100% in persons with recurrent anterior shoulder instability. Reverse Hill-Sachs lesion has been described in patients with posterior shoulder instability. Glenoid bone loss is typically associated with the Hill-Sachs lesion in patients with recurrent anterior shoulder instability. The lesion is a bipolar injury, and identification of concomitant glenoid bone loss is essential to optimize clinical outcome. Other pathology (eg, Bankart tear, labral or capsular injuries) must be identified, as well. Treatment is dictated by subjective and objective findings of shoulder instability and radiographic findings. Nonsurgical management, including focused rehabilitation, is acceptable in cases of small bony defects and nonengaging lesions in which the glenohumeral joint remains stable during desired activities. Surgical options include arthroscopic and open techniques.", "title": "" } ]
scidocsrr
074441ec90fbdfcc8f349e7c1b9e4e10
A feature study for classification-based speech separation at very low signal-to-noise ratio
[ { "docid": "1cd45a4f897ea6c473d00c4913440836", "text": "What is the computational goal of auditory scene analysis? This is a key issue to address in the Marrian information-processing framework. It is also an important question for researchers in computational auditory scene analysis (CASA) because it bears directly on how a CASA system should be evaluated. In this chapter I discuss different objectives used in CASA. I suggest as a main CASA goal the use of the ideal time-frequency (T-F) binary mask whose value is one for a T-F unit where the target energy is greater than the interference energy and is zero otherwise. The notion of the ideal binary mask is motivated by the auditory masking phenomenon. Properties of the ideal binary mask are discussed, including their relationship to automatic speech recognition and human speech intelligibility. This CASA goal has led to algorithms that directly estimate the ideal binary mask in monaural and binaural conditions, and these algorithms have substantially advanced the state-of-the-art performance in speech separation.", "title": "" } ]
[ { "docid": "47afccb5e7bcdade764666f3b5ab042e", "text": "Social media comprises interactive applications and platforms for creating, sharing and exchange of user-generated contents. The past ten years have brought huge growth in social media, especially online social networking services, and it is changing our ways to organize and communicate. It aggregates opinions and feelings of diverse groups of people at low cost. Mining the attributes and contents of social media gives us an opportunity to discover social structure characteristics, analyze action patterns qualitatively and quantitatively, and sometimes the ability to predict future human related events. In this paper, we firstly discuss the realms which can be predicted with current social media, then overview available predictors and techniques of prediction, and finally discuss challenges and possible future directions.", "title": "" }, { "docid": "2ebb21cb1c6982d2d3839e2616cac839", "text": "In order to reduce micromouse dashing time in complex maze, and improve micromouse’s stability in high speed dashing, diagonal dashing method was proposed. Considering the actual dashing trajectory of micromouse in diagonal path, the path was decomposed into three different trajectories; Fully consider turning in and turning out of micromouse dashing action in diagonal, leading and passing of the every turning was used to realize micromouse posture adjustment, with the help of accelerometer sensor ADXL202, rotation angle error compensation was done and the micromouse realized its precise position correction; For the diagonal dashing, front sensor S1,S6 and accelerometer sensor ADXL202 were used to ensure micromouse dashing posture. Principle of new diagonal dashing method is verified by micromouse based on STM32F103. Experiments of micromouse dashing show that diagonal dashing method can greatly improve its stability, and also can reduce its dashing time in complex maze.", "title": "" }, { "docid": "0be66cf5af756aa7bc37e4b452419c45", "text": "Fact checking has captured the attention of the media and the public alike; it has also recently received strong attention from the computer science community, in particular from data and knowledge management, natural language processing and information retrieval; we denote these together under the term “content management”. In this paper, we identify the fact checking tasks which can be performed with the help of content management technologies, and survey the recent research works in this area, before laying out some perspectives for the future. We hope our work will provide interested researchers, journalists and fact checkers with an entry point in the existing literature as well as help develop a roadmap for future research and development work.", "title": "" }, { "docid": "280e83986138daf0237e7502747b8a50", "text": "E-government adoption is the focus of many research studies. However, few studies have compared the adoption factors to identify the most salient predictors of e-government use. This study compares popular adoption constructs to identify the most influential. A survey was administered to elicit citizen perceptions of e-government services. The results of stepwise regression indicate perceived usefulness, trust of the internet, previous use of an e-government service and perceived ease of use all have a significant impact on one’s intention to use an e-government service. 
The implications for research and practice are discussed below.", "title": "" }, { "docid": "c23008c36f0bca7a1faf405c5f3083ff", "text": "The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.", "title": "" }, { "docid": "2ec37b57a75c70e9edeb9603b0dac5e0", "text": "In this paper, different analysis and design techniques are used to analyze the drive motor in the 2004 Prius hybrid vehicle and to examine alternative spoke-type magnet rotor (buried magnets with magnetization which is orthogonal to the radial direction) and induction motor arrangements. These machines are characterized by high transient torque requirement, compactness, and forced cooling. While rare-earth magnet machines are commonly used in these applications, there is an increasing interest in motors without magnets, hence the investigation of an induction motor. This paper illustrates that the machines operate under highly saturated conditions at high torque and that care should be taken when selecting the correct analysis technique. This is illustrated by divergent results when using I-Psi loops and dq techniques to calculate the torque.", "title": "" }, { "docid": "5e6209b4017039a809f605d0847a57af", "text": "Bag-of-ngrams (BoN) models are commonly used for representing text. One of the main drawbacks of traditional BoN is the ignorance of n-gram’s semantics. In this paper, we introduce the concept of Neural Bag-of-ngrams (Neural-BoN), which replaces sparse one-hot n-gram representation in traditional BoN with dense and rich-semantic n-gram representations. We first propose context guided n-gram representation by adding n-grams to word embeddings model. However, the context guided learning strategy of word embeddings is likely to miss some semantics for text-level tasks. Text guided ngram representation and label guided n-gram representation are proposed to capture more semantics like topic or sentiment tendencies. Neural-BoN with the latter two n-gram representations achieve state-of-the-art results on 4 documentlevel classification datasets and 6 semantic relatedness categories. They are also on par with some sophisticated DNNs on 3 sentence-level classification datasets. Similar to traditional BoN, Neural-BoN is efficient, robust and easy to implement. We expect it to be a strong baseline and be used in more real-world applications.", "title": "" }, { "docid": "151cb6f067634d915f24865c16425277", "text": "We describe a framework for using analytics to proactively tackle voluntary attrition of employees. 
This is especially important in organizations with large services arms where unplanned departures of key employees can lead to big losses by way of lost productivity, delayed or missed deadlines, and hiring costs of replacements. By proactively identifying top talent at a high risk of voluntarily leaving, an organization can take appropriate action in time to actually affect such employee departures, thereby avoiding financial and knowledge losses. The main retention action we study in this paper is that of proactive salary raises to at-risk employees. Our approach uses data mining for identifying employees at risk of attrition and balances the cost of attrition/replacement of an employee against the cost of retaining that employee (by way of increased salary) to enable the optimal use of limited funds that may be available for this purpose, thereby allowing the action to be targeted towards employees with the highest potential returns on investment. This approach has been used to do a proactive retention action for several thousand employees across several geographies and business units for a large, Fortune 500 multinational company. We discuss this action and discuss the results to date that show a significant reduction in voluntary resignations of the targeted groups.", "title": "" }, { "docid": "5419504f65f3ae634f064f692f38f38f", "text": "Part-of-speech tagging is an important preprocessing step in many natural language processing applications. Despite much work already carried out in this field, there is still room for improvement, especially in Portuguese. We experiment here with an architecture based on neural networks and word embeddings, and that has achieved promising results in English. We tested our classifier in different corpora: a new revision of the Mac-Morpho corpus, in which we merged some tags and performed corrections and two previous versions of it. We evaluate the impact of using different types of word embeddings and explicit features as input. We compare our tagger’s performance with other systems and achieve state-of-the-art results in the new corpus. We show how different methods for generating word embeddings and additional features differ in accuracy. The work reported here contributes with a new revision of the Mac-Morpho corpus and a state-of-the-art new tagger available for use out-of-the-box.", "title": "" }, { "docid": "aa58cb2b2621da6260aeb203af1bd6f1", "text": "Aspect-based opinion mining from online reviews has attracted a lot of attention recently. The main goal of all of the proposed methods is extracting aspects and/or estimating aspect ratings. Recent works, which are often based on Latent Dirichlet Allocation (LDA), consider both tasks simultaneously. These models are normally trained at the item level, i.e., a model is learned for each item separately. Learning a model per item is fine when the item has been reviewed extensively and has enough training data. However, in real-life data sets such as those from Epinions.com and Amazon.com more than 90% of items have less than 10 reviews, so-called cold start items. State-of-the-art LDA models for aspect-based opinion mining are trained at the item level and therefore perform poorly for cold start items due to the lack of sufficient training data. In this paper, we propose a probabilistic graphical model based on LDA, called Factorized LDA (FLDA), to address the cold start problem. 
The underlying assumption of FLDA is that aspects and ratings of a review are influenced not only by the item but also by the reviewer. It further assumes that both items and reviewers can be modeled by a set of latent factors which represent their aspect and rating distributions. Different from state-of-the-art LDA models, FLDA is trained at the category level and learns the latent factors using the reviews of all the items of a category, in particular the non cold start items, and uses them as prior for cold start items. Our experiments on three real-life data sets demonstrate the improved effectiveness of the FLDA model in terms of likelihood of the held-out test set. We also evaluate the accuracy of FLDA based on two application-oriented measures.", "title": "" }, { "docid": "918bf13ef0289eb9b78309c83e963b26", "text": "For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.", "title": "" }, { "docid": "dc817bc11276d76f8d97f67e4b1b2155", "text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. A short conclusion will describe further research & analysis to be performed in the field of SOC design.", "title": "" }, { "docid": "472e9807c2f4ed6d1e763dd304f22c64", "text": "Commercial analytical database systems suffer from a high \"time-to-first-analysis\": before data can be processed, it must be modeled and schematized (a human effort), transferred into the database's storage layer, and optionally clustered and indexed (a computational effort). For many types of structured data, this upfront effort is unjustifiable, so the data are processed directly over the file system using the Hadoop framework, despite the cumulative performance benefits of processing this data in an analytical database system. In this paper we describe a system that achieves the immediate gratification of running MapReduce jobs directly over a file system, while still making progress towards the long-term performance benefits of database systems. 
The basic idea is to piggyback on MapReduce jobs, leverage their parsing and tuple extraction operations to incrementally load and organize tuples into a database system, while simultaneously processing the file system data. We call this scheme Invisible Loading, as we load fractions of data at a time at almost no marginal cost in query latency, but still allow future queries to run much faster.", "title": "" }, { "docid": "7b7f1f029e13008b1578c87c7319b645", "text": "This paper presents the design and manufacturing processes of a new piezoactuated XY stage with integrated parallel, decoupled, and stacked kinematics structure for micro-/nanopositioning application. The flexure-based XY stage is composed of two decoupled prismatic-prismatic limbs which are constructed by compound parallelogram flexures and compound bridge-type displacement amplifiers. The two limbs are assembled in a parallel and stacked manner to achieve a compact stage with the merits of parallel kinematics. Analytical models for the mechanical performance assessment of the stage in terms of kinematics, statics, stiffness, load capacity, and dynamics are derived and verified with finite element analysis. A prototype of the XY stage is then fabricated, and its decoupling property is tested. Moreover, the Bouc-Wen hysteresis model of the system is identified by resorting to particle swarm optimization, and a control scheme combining the inverse hysteresis model-based feedforward with feedback control is employed to compensate for the plant nonlinearity and uncertainty. Experimental results reveal that a submicrometer accuracy single-axis motion tracking and biaxial contouring can be achieved by the micropositioning system, which validate the effectiveness of the proposed mechanism and controller designs as well.", "title": "" }, { "docid": "1d15d5e8176aea14713a7f7b426d41aa", "text": "In this work we present a deep learning framework for video compressive sensing. The proposed formulation enables recovery of video frames in a few seconds at significantly improved reconstruction quality compared to previous approaches. Our investigation starts by learning a linear mapping between video sequences and corresponding measured frames which turns out to provide promising results. We then extend the linear formulation to deep fully-connected networks and explore the performance gains using deeper architectures. Our analysis is always driven by the applicability of the proposed framework on existing compressive video architectures. Extensive simulations on several video sequences document the superiority of our approach both quantitatively and qualitatively. Finally, our analysis offers insights into understanding how dataset sizes and number of layers affect reconstruction performance while raising a few points for future investigation.", "title": "" }, { "docid": "e9f8bf1d0a1ffaf97da66578779a5c4e", "text": "Spreadsheets have proven highly successful for interacting with numerical data, such as applying algebraic operations, defining data propagation relationships, manipulating rows or columns, and exploring \"what-if\" scenarios. Spreadsheet techniques have recently been extended from numeric domains to other domains [1, 2]. Here we present a spreadsheet approach to displaying and exploring information visualizations, with large, abstract, multidimensional data sets that are visually represented in multiple ways.
We illustrate how spreadsheet techniques provide a structured, intuitive, and powerful interface for investigating information visualizations. An earlier version of this article appeared in the proceedings of the 1997 Information Visualization Symposium [3]. Here we refocus the discussion to illustrate principles that make the spreadsheet approach powerful. These principles show how we can perform many user tasks easily in the visualization spreadsheet that prove much more difficult using other approaches. The visualization spreadsheet's benefit comes from enabling users to build multiple visual representations of several data sets, perform operations on these visualizations together or separately, and compare and contrast them visually. These operations are becoming ever more important as we realize certain interaction capabilities are critical, such as exploring different views of the data interactively, applying operations like rotation or data filtering to a group of views, and comparing two or more related data sets. These operations fit naturally into a spreadsheet environment. These benefits derive from the way spreadsheets span a range of user interactions. On the one hand, spreadsheets directly benefit end users, because the direct manipulation interface makes it easy to view, navigate, and interact with the data. On the other hand, spreadsheets provide a flexible and easy-to-learn environment for user programming. The success of spreadsheet-based structured interaction eliminates many of the stumbling blocks in traditional programming environments. Spreadsheet developers create templates that enable end users to reliably repeat often-needed computations without the effort of redevelopment or coding. Users do not have to worry about the data dependencies between data sets or memory management. These programming idiosyncrasies are taken care of automatically. By providing a natural environment to explore and apply operations on data, visualization spreadsheets easily enable the exploration of data sets. What is a visualization spreadsheet? Based on our experiences and drawing on others' past work [1-3], we define the spreadsheet paradigm's characteristics as follows: the tabular layout lets users view collections of visualizations simultaneously, and cells can handle large data sets instead of a few numbers. …", "title": "" }, { "docid": "48af87459dedc417c1ad090fc72ee3d1", "text": "Four studies examined English-speaking children's productivity with word order and verb morphology. Two- and 3-year-olds were taught novel transitive verbs with experimentally controlled argument structures. The younger children neither used nor comprehended word order with these verbs; older children comprehended and used word order correctly to mark agents and patients of the novel verbs. Children as young as 2 years 1 month added -ing but not -ed to verb stems; older children were productive with both inflections. These studies demonstrate that the present progressive inflection is used productively before the regular past tense marker and suggest that productivity with word order may be independent of developments in verb morphology. The findings are discussed in terms of M. Tomasello's (1992a) Verb Island hypothesis and M.
Rispoli's (1991) notion of the mosaic acquisition of grammatical relations.", "title": "" }, { "docid": "c2f620287606a2e233e2d3654c64c016", "text": "Urban terrain is complex and they present a very challenging and difficult environment for simulating virtual forces as well as for rendering. The objective of this work is to research on Binary Space Partition technique (BSP) for modeling urban terrain environments. BSP is a method for recursively subdividing a space into convex sets by hyper-planes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree. Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.", "title": "" }, { "docid": "0873dd0181470d722f0efcc8f843eaa6", "text": "Compared to traditional service, the characteristics of the customer behavior in electronic service are personalized demand, convenient consumed circumstance and perceptual consumer behavior. Therefore, customer behavior is an important factor to facilitate online electronic service. The purpose of this study is to explore the key success factors affecting customer purchase intention of electronic service through the behavioral perspectives of customers. Based on the theory of technology acceptance model (TAM) and self service technology (SST), the study proposes a theoretical model for the empirical examination of the customer intention for purchasing electronic services. A comprehensive survey of online customers having e-shopping experiences is undertaken. Then this model is tested by means of the statistical analysis method of structure equation model (SEM). The empirical results indicated that perceived usefulness and perceived assurance have a significant impact on purchase in e-service. Discussion and implication are presented in the end.", "title": "" }, { "docid": "83580c373e9f91b021d90f520011a5da", "text": "Pathfinding for a single agent is the problem of planning a route from an initial location to a goal location in an environment, going around obstacles. Pathfinding for multiple agents also aims to plan such routes for each agent, subject to different constraints, such as restrictions on the length of each path or on the total length of paths, no self-intersecting paths, no intersection of paths/plans, no crossing/meeting each other. It also has variations for finding optimal solutions, e.g., with respect to the maximum path length, or the sum of plan lengths. These problems are important for many real-life applications, such as motion planning, vehicle routing, environmental monitoring, patrolling, computer games. Motivated by such applications, we introduce a formal framework that is general enough to address all these problems: we use the expressive high-level representation formalism and efficient solvers of the declarative programming paradigm Answer Set Programming. We also introduce heuristics to improve the computational efficiency and/or solution quality. We show the applicability and usefulness of our framework by experiments, with randomly generated problem instances on a grid, on a real-world road network, and on a real computer game terrain.", "title": "" } ]
scidocsrr
dfd633d8d54f866d069cc6db87a652c7
Design of lane keeping assist system for autonomous vehicles
[ { "docid": "71c81eb75f55ad6efaf8977b93e6dbef", "text": "Autonomous vehicle navigation is challenging since various types of road scenarios in real urban environments have to be considered, particularly when only perception sensors are used, without position information. This paper presents a novel real-time optimal-drivable-region and lane detection system for autonomous driving based on the fusion of light detection and ranging (LIDAR) and vision data. Our system uses a multisensory scheme to cover the most drivable areas in front of a vehicle. We propose a feature-level fusion method for the LIDAR and vision data and an optimal selection strategy for detecting the best drivable region. Then, a conditional lane detection algorithm is selectively executed depending on the automatic classification of the optimal drivable region. Our system successfully handles both structured and unstructured roads. The results of several experiments are provided to demonstrate the reliability, effectiveness, and robustness of the system.", "title": "" } ]
[ { "docid": "396f6b6c09e88ca8e9e47022f1ae195b", "text": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.", "title": "" }, { "docid": "d49e6b7c6da44fae798e94dcb3a90c88", "text": "Given a photo collection of “unconstrained” face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach to adapt to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, following by using a novel photometric stereo formulation to complete the fine details, under a coarse-to-fine scheme. Our scheme incorporates a structural similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. The evaluation of reconstruction performance is through a novel quality measure, in the absence of ground truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.", "title": "" }, { "docid": "65289003b014d86eed03baad6aa1ed83", "text": "Camera calibration is one of the long existing research issues in computer vision domain. Typical calibration methods take two steps for the procedure: control points localization and camera parameters computation. In practical situation, control points localization is a time-consuming task because the localization puts severe assumption that the calibration object should be visible in all images. To satisfy the assumption, users may avoid moving the calibration object near the image boundary. As a result, we estimate poor quality parameters. In this paper, we aim to solve this partial occlusion problem of the calibration object. To solve the problem, we integrate a planar marker tracking algorithm that can track its target marker even with partial occlusion. Specifically, we localize control points by a RANdom DOts Markers (RANDOM) tracking algorithm that uses markers with randomly distributed circle dots. Once the control points are localized, they are used to estimate the camera parameters. 
The proposed method is validated with both synthetic and real world experiments. The experimental results show that the proposed method realizes camera calibration from image on which part of the calibration object is visible.", "title": "" }, { "docid": "c70e11160c90bd67caa2294c499be711", "text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.", "title": "" }, { "docid": "e0b85ff6cd78f1640f25215ede3a39e6", "text": "Grammatical error diagnosis is an important task in natural language processing. This paper introduces our Chinese Grammatical Error Diagnosis (CGED) system in the NLP-TEA-3 shared task for CGED. The CGED system can diagnose four types of grammatical errors which are redundant words (R), missing words (M), bad word selection (S) and disordered words (W). We treat the CGED task as a sequence labeling task and describe three models, including a CRFbased model, an LSTM-based model and an ensemble model using stacking. We also show in details how we build and train the models. Evaluation includes three levels, which are detection level, identification level and position level. On the CGED-HSK dataset of NLP-TEA-3 shared task, our system presents the best F1-scores in all the three levels and also the best recall in the last two levels.", "title": "" }, { "docid": "d8472e56a4ffe5d6b0cb0c902186d00b", "text": "In C. S. Peirce, as well as in the work of many biosemioticians, the semiotic object is sometimes described as a physical “object” with material properties and sometimes described as an “ideal object” or mental representation. I argue that to the extent that we can avoid these types of characterizations we will have a more scientific definition of sign use and will be able to better integrate the various fields that interact with biosemiotics. In an effort to end Cartesian dualism in semiotics, which has been the main obstacle to a scientific biosemiotics, I present an argument that the “semiotic object” is always ultimately the objective of self-affirmation (of habits, physical or mental) and/or self-preservation. Therefore, I propose a new model for the sign triad: response-sign-objective. With this new model it is clear, as I will show, that self-mistaking (not self-negation as others have proposed) makes learning, creativity and purposeful action possible via signs. I define an “interpretation” as a response to something as if it were a sign, but whose semiotic objective does not, in fact, exist. If the response-as-interpretation turns out to be beneficial for the system after all, there is biopoiesis. 
When the response is not “interpretive,” but self-confirming in the usual way, there is biosemiosis. While the conditions conducive to fruitful misinterpretation (e.g., accidental similarity of non-signs to signs and/or contiguity of non-signs to self-sustaining processes) might be artificially enhanced, according to this theory, the outcomes would be, by nature, more or less uncontrollable and unpredictable. Nevertheless, biosemiotics could be instrumental in the manipulation and/or artificial creation of purposeful systems insofar as it can describe a formula for the conditions under which new objectives and novel purposeful behavior may emerge, however unpredictably.", "title": "" }, { "docid": "d4cf47c898268ffe01dc9aab75810d7c", "text": "In this paper, a new robust fault detection and isolation (FDI) methodology for an unmanned aerial vehicle (UAV) is proposed. The fault diagnosis scheme is constructed based on observer-based techniques according to fault models corresponding to each component (actuator, sensor, and structure). The proposed fault diagnosis method takes advantage of the structural perturbation of the UAV model due to the icing (the main structural fault in aircraft), sensor, and actuator faults to reduce the error of observers that are used in the FDI module in addition to distinguishing among faults in different components. Moreover, the accuracy of the FDI module is increased by considering the structural perturbation of the UAV linear model due to wind disturbances which is the major environmental disturbance affecting an aircraft. Our envisaged FDI strategy is capable of diagnosing recurrent faults through properly designed residuals with different responses to different types of faults. Simulation results are provided to illustrate and demonstrate the effectiveness of our proposed FDI approach due to faults in sensors, actuators, and structural components of unmanned aerial vehicles.", "title": "" }, { "docid": "ad6bb165620dafb7dcadaca91c9de6b0", "text": "This study was conducted to analyze the short-term effects of violent electronic games, played with or without a virtual reality (VR) device, on the instigation of aggressive behavior. Physiological arousal (heart rate (HR)), priming of aggressive thoughts, and state hostility were also measured to test their possible mediation on the relationship between playing the violent game (VG) and aggression. The participants--148 undergraduate students--were randomly assigned to four treatment conditions: two groups played a violent computer game (Unreal Tournament), and the other two a non-violent game (Motocross Madness), half with a VR device and the remaining participants on the computer screen. In order to assess the game effects the following instruments were used: a BIOPAC System MP100 to measure HR, an Emotional Stroop task to analyze the priming of aggressive and fear thoughts, a self-report State Hostility Scale to measure hostility, and a competitive reaction-time task to assess aggressive behavior. The main results indicated that the violent computer game had effects on state hostility and aggression. Although no significant mediation effect could be detected, regression analyses showed an indirect effect of state hostility between playing a VG and aggression.", "title": "" }, { "docid": "5946378b291a1a0e1fb6df5cd57d716f", "text": "Robots are being deployed in an increasing variety of environments for longer periods of time. 
As the number of robots grows, they will increasingly need to interact with other robots. Additionally, the number of companies and research laboratories producing these robots is increasing, leading to the situation where these robots may not share a common communication or coordination protocol. While standards for coordination and communication may be created, we expect that robots will need to additionally reason intelligently about their teammates with limited information. This problem motivates the area of ad hoc teamwork in which an agent may potentially cooperate with a variety of teammates in order to achieve a shared goal. This article focuses on a limited version of the ad hoc teamwork problem in which an agent knows the environmental dynamics and has had past experiences with other teammates, though these experiences may not be representative of the current teammates. To tackle this problem, this article introduces a new general-purpose algorithm, PLASTIC, that reuses knowledge learned from previous teammates or provided by experts to quickly adapt to new teammates. This algorithm is instantiated in two forms: 1) PLASTIC–Model – which builds models of previous teammates’ behaviors and plans behaviors online using these models and 2) PLASTIC–Policy – which learns policies for cooperating with previous teammates and selects among these policies online. We evaluate PLASTIC on two benchmark tasks: the pursuit domain and robot soccer in the RoboCup 2D simulation domain. Recognizing that a key requirement of ad hoc teamwork is adaptability to previously unseen agents, the tests use more than 40 previously unknown teams on the first task and 7 previously unknown teams on the second. While PLASTIC assumes that there is some degree of similarity between the current and past teammates’ behaviors, no steps are taken in the experimental setup to make sure this assumption holds.", "title": "" }, { "docid": "0fc08886411f225a3e5e767be3b6fd39", "text": "To realize the promise of ubiquitous embedded deep network inference, it is essential to seek limits of energy and area efficiency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with the reduction in precision. Here, for the first time, we demonstrate ResNet-18, ResNet-34, ResNet-50, ResNet152, Inception-v3, densenet-161, and VGG-16bn networks on the ImageNet classification benchmark that, at 8-bit precision exceed the accuracy of the full-precision baseline networks after one epoch of finetuning, thereby leveraging the availability of pretrained models. We also demonstrate for the first time ResNet-18, ResNet-34, and ResNet-50 4-bit models that match the accuracy of the full-precision baseline networks. Surprisingly, the weights of the low-precision networks are very close (in cosine similarity) to the weights of the corresponding baseline networks, making training from scratch unnecessary.
The number of iterations required by stochastic gradient descent to achieve a given training error is related to the square of (a) the distance of the initial solution from the final plus (b) the maximum variance of the gradient estimates. By drawing inspiration from this observation, we (a) reduce solution distance by starting with pretrained fp32 precision baseline networks and fine-tuning, and (b) combat noise introduced by quantizing weights and activations during training, by using larger batches along with matched learning rate annealing. Together, these two techniques offer a promising heuristic to discover low-precision networks, if they exist, close to fp32 precision baseline networks.", "title": "" }, { "docid": "9c3aed8548b61b70ae35be98050fb4bf", "text": "In the present work, a widely tunable high-Q air filled evanescent cavity bandpass filter is created in an LTCC substrate. A low loss Rogers Duroidreg flexible substrate forms the top of the filter, acting as a membrane for a tunable parasitic capacitor that allows variable frequency loading. A commercially available piezoelectric actuator is mounted on the Duroidreg substrate for precise electrical tuning of the filter center frequency. The filter is tuned from 2.71 to 4.03 GHz, with insertion losses ranging from 1.3 to 2.4 dB across the range for a 2.5% bandwidth filter. Secondarily, an exceptionally narrow band filter is fabricated to show the potential for using the actuators to fine tune the response to compensate for fabrication tolerances. While most traditional machining techniques would not allow for such narrow band filtering, the high-Q and the sensitive tuning combine to allow for near channel selection for a front-end receiver. For further analysis, a widely tunable resonator is also created with a 100% tunable frequency range, from 2.3 to 4.6 GHz. The resonator analysis gives unloaded quality factors ranging from 360 to 700 with a maximum frequency loading of 89%. This technique shows a lot of promise for tunable RF filtering applications.", "title": "" }, { "docid": "0e8dbf7567f183c314b55890cad98050", "text": "Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. 
Suganthan) on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter adaptive DE variants.", "title": "" }, { "docid": "61165fc9e404ef0fdf3c2525845cf032", "text": "The automated comparison of points of view between two politicians is a very challenging task, due not only to the lack of annotated resources, but also to the different dimensions participating to the definition of agreement and disagreement. In order to shed light on this complex task, we first carry out a pilot study to manually annotate the components involved in detecting agreement and disagreement. Then, based on these findings, we implement different features to capture them automatically via supervised classification. We do not focus on debates in dialogical form, but we rather consider sets of documents, in which politicians may express their position with respect to different topics in an implicit or explicit way, like during an electoral campaign. We create and make available three different datasets.", "title": "" }, { "docid": "d026b12bedce1782a17654f19c7dcdf7", "text": "The millions of movies produced in the human history are valuable resources for computer vision research. However, learning a vision model from movie data would meet with serious difficulties. A major obstacle is the computational cost – the length of a movie is often over one hour, which is substantially longer than the short video clips that previous study mostly focuses on. In this paper, we explore an alternative approach to learning vision models from movies. Specifically, we consider a framework comprised of a visual module and a temporal analysis module. Unlike conventional learning methods, the proposed approach learns these modules from different sets of data – the former from trailers while the latter from movies. This allows distinctive visual features to be learned within a reasonable budget while still preserving long-term temporal structures across an entire movie. We construct a large-scale dataset for this study and define a series of tasks on top. Experiments on this dataset showed that the proposed method can substantially reduce the training time while obtaining highly effective features and coherent temporal structures.", "title": "" }, { "docid": "2509b427f650c7fc54cdb5c38cdb2bba", "text": "Inbreeding depression on female fertility and calving ease in Spanish dairy cattle was studied by the traditional inbreeding coefficient (F) and an alternative measurement indicating the inbreeding rate (DeltaF) for each animal. Data included records from 49,497 and 62,134 cows for fertility and calving ease, respectively. Both inbreeding measurements were included separately in the routine genetic evaluation models for number of insemination to conception (sequential threshold animal model) and calving ease (sire-maternal grandsire threshold model). The F was included in the model as a categorical effect, whereas DeltaF was included as a linear covariate. Inbred cows showed impaired fertility and tended to have more difficult calvings than low or noninbred cows. Pregnancy rate decreased by 1.68% on average for cows with F from 6.25 to 12.5%. This amount of inbreeding, however, did not seem to increase dystocia incidence. Inbreeding depression was larger for F greater than 12.5%. Cows with F greater than 25% had lower pregnancy rate and higher dystocia rate (-6.37 and 1.67%, respectively) than low or noninbred cows. 
The DeltaF had a significant effect on female fertility. A DeltaF = 0.01, corresponding to an inbreeding coefficient of 5.62% for the average equivalent generations in the data used (5.68), lowered pregnancy rate by 1.5%. However, the posterior estimate for the effect of DeltaF on calving ease was not significantly different from zero. Although similar patterns were found with both F and DeltaF, the latter detected a lowered pregnancy rate at an equivalent F, probably because it may consider the known depth of the pedigree. The inbreeding rate might be an alternative choice to measure inbreeding depression.", "title": "" }, { "docid": "15a079037d3dbb1b08591c0a3c8e0804", "text": "The paper offers an introduction and a road map to the burgeoning literature on two-sided markets. In many industries, platforms court two (or more) sides that use the platform to interact with each other. The platforms’ usage or variable charges impact the two sides’ willingness to trade, and thereby their net surpluses from potential interactions; the platforms’ membership or fixed charges in turn determine the end-users’ presence on the platform. The platforms’ fine design of the structure of variable and fixed charges is relevant only if the two sides do not negotiate away the corresponding usage and membership externalities. The paper first focuses on usage charges and provides conditions for the allocation of the total usage charge (e.g., the price of a call or of a payment card transaction) between the two sides not to be neutral; the failure of the Coase theorem is necessary but not sufficient for two-sidedness. Second, the paper builds a canonical model integrating usage and membership externalities. This model allows us to unify and compare the results obtained in the two hitherto disparate strands of the literature emphasizing either form of externality; and to place existing membership (or indirect) externalities models on a stronger footing by identifying environments in which these models can accommodate usage pricing. We also obtain general results on usage pricing of independent interest. Finally, the paper reviews some key economic insights on platform price and non-price strategies.", "title": "" }, { "docid": "1afdefb31d7b780bb78b59ca8b0d3d8a", "text": "Convolutional Neural Network (CNN) is a very powerful approach to extract discriminative local descriptors for effective image search. Recent work adopts fine-tuned strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper, we propose a novel framework to achieve competitive retrieval performance. Firstly, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this can effectively address the burstiness issue and improve retrieval accuracy. Secondly, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy.", "title": "" }, { "docid": "949a5da7e1a8c0de43dbcb7dc589851c", "text": "Silicon photonics devices offer promising solution to meet the growing bandwidth demands of next-generation interconnects. This paper presents a 5 × 25 Gb/s carrier-depletion microring-based wavelength-division multiplexing (WDM) transmitter in 65 nm CMOS. 
An AC-coupled differential driver is proposed to realize 4 × VDD output swing as well as tunable DC-biasing. The proposed transmitter incorporates 2-tap asymmetric pre-emphasis to effectively cancel the optical nonlinearity of the ring modulator. An average-power-based dynamic wavelength stabilization loop is also demonstrated to compensate for thermal induced resonant wavelength drift. At 25 Gb/s operation, each transmitter channel consumes 113.5 mW and maintains 7 dB extinction ratio with a 4.4 V pp-diff output swing in the presence of thermal fluctuations.", "title": "" }, { "docid": "61f079cb59505d9bf1de914330dd852e", "text": "Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9 percent. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed.", "title": "" }, { "docid": "83692fd5290c7c2a43809e1e2014566d", "text": "Humans have a biological predisposition to form attachment to social partners, and they seem to form attachment even toward non-human and inanimate targets. Attachment styles influence not only interpersonal relationships, but interspecies and object attachment as well. We hypothesized that young people form attachment toward their mobile phone, and that people with higher attachment anxiety use the mobile phone more likely as a compensatory attachment target. We constructed a scale to observe people's attachment to their mobile and we assessed their interpersonal attachment style. In this exploratory study we found that young people readily develop attachment toward their phone: they seek the proximity of it and experience distress on separation. People's higher attachment anxiety predicted higher tendency to show attachment-like features regarding their mobile. Specifically, while the proximity of the phone proved to be equally important for people with different attachment styles, the constant contact with others through the phone was more important for anxiously attached people. We conclude that attachment to recently emerged artificial objects, like the mobile, may be the result of cultural co-option of the attachment system. People with anxious attachment style may face challenges as the constant contact and validation the computer-mediated communication offers may deepen their dependence on others.", "title": "" } ]
scidocsrr
db93dfb7cc18d8256679930d7c972511
CNN architectures for large-scale audio classification
[ { "docid": "9787d99954114de7ddd5a58c18176380", "text": "This paper presents a system for acoustic event detection in recordings from real life environments. The events are modeled using a network of hidden Markov models; their size and topology is chosen based on a study of isolated events recognition. We also studied the effect of ambient background noise on event classification performance. On real life recordings, we tested recognition of isolated sound events and event detection. For event detection, the system performs recognition and temporal positioning of a sequence of events. An accuracy of 24% was obtained in classifying isolated sound events into 61 classes. This corresponds to the accuracy of classifying between 61 events when mixed with ambient background noise at 0dB signal-to-noise ratio. In event detection, the system is capable of recognizing almost one third of the events, and the temporal positioning of the events is not correct for 84% of the time.", "title": "" }, { "docid": "afee419227629f8044b5eb0addd65ce3", "text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.", "title": "" } ]
[ { "docid": "e9e7cb42ed686ace9e9785fafd3c72f8", "text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).", "title": "" }, { "docid": "45a92ab90fabd875a50229921e99dfac", "text": "This paper describes an empirical study of the problems encountered by 32 blind users on the Web. Task-based user evaluations were undertaken on 16 websites, yielding 1383 instances of user problems. The results showed that only 50.4% of the problems encountered by users were covered by Success Criteria in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0). For user problems that were covered by WCAG 2.0, 16.7% of websites implemented techniques recommended in WCAG 2.0 but the techniques did not solve the problems. These results show that few developers are implementing the current version of WCAG, and even when the guidelines are implemented on websites there is little indication that people with disabilities will encounter fewer problems. The paper closes by discussing the implications of this study for future research and practice. In particular, it discusses the need to move away from a problem-based approach towards a design principle approach for web accessibility.", "title": "" }, { "docid": "45252c6ffe946bf0f9f1984f60ffada6", "text": "Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates. In this work we reparameterize discrete variational auto-encoders using the Gumbel-Max perturbation model that represents the Gibbs distribution using the arg max of randomly perturbed encoder. We subsequently apply the direct loss minimization technique to propagate gradients through the reparameterized arg max. The resulting gradient is estimated by the difference of the encoder gradients that are evaluated in two arg max predictions.", "title": "" }, { "docid": "74141327edf56eb5a198f446d12998a0", "text": "Intramuscular myxomas of the hand are rare entities. Primarily found in the myocardium, these lesions also affect the bone and soft tissues in other parts of the body. This article describes a case of hypothenar muscles myxoma treated with local surgical excision after frozen section biopsy with tumor-free margins. Radiographic images of the axial and appendicular skeleton were negative for fibrous dysplasia, and endocrine studies were within normal limits. The 8-year follow-up period has been uneventful, with no complications. The patient is currently recurrence free, with normal intrinsic hand function.", "title": "" }, { "docid": "2f138f030565d85e4dcd9f90585aecb0", "text": "One of the central questions in neuroscience is how particular tasks, or computations, are implemented by neural networks to generate behavior. The prevailing view has been that information processing in neural networks results primarily from the properties of synapses and the connectivity of neurons within the network, with the intrinsic excitability of single neurons playing a lesser role. 
As a consequence, the contribution of single neurons to computation in the brain has long been underestimated. Here we review recent work showing that neuronal dendrites exhibit a range of linear and nonlinear mechanisms that allow them to implement elementary computations. We discuss why these dendritic properties may be essential for the computations performed by the neuron and the network and provide theoretical and experimental examples to support this view.", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" }, { "docid": "33cd162dc2c0132dbd4153775a569c5d", "text": "The question whether preemptive systems are better than non-preemptive systems has been debated for a long time, but only partial answers have been provided in the real-time literature and still some issues remain open. In fact, each approach has advantages and disadvantages, and no one dominates the other when both predictability and efficiency have to be taken into account in the system design. In particular, limiting preemptions allows increasing program locality, making timing analysis more predictable with respect to the fully preemptive case. In this paper, we integrate the features of both preemptive and non-preemptive scheduling by considering that each task can switch to non-preemptive mode, at any time, for a bounded interval. Three methods (with different complexity and performance) are presented to calculate the longest non-preemptive interval that can be executed by each task, under fixed priorities, without degrading the schedulability of the task set, with respect to the fully preemptive case. The methods are also compared by simulations to evaluate their effectiveness in reducing the number of preemptions.", "title": "" }, { "docid": "cc1ce5471be55747faaa14e28e6eb814", "text": "A meta-analysis was performed to quantify the association between antisocial behavior (ASB) and performance on executive functioning (EF) measures. The metaanalysis expanded on Morgan and Lilienfeld’s (2000) meta-analysis of the same topic by including studies published between 1997 and 2008 and by examining a wider range of EF measures. A total of 42 studies (2,595 participants) were included in the present meta-analysis. Overall, the mean effect size indicated that antisocial groups performed 0.47 standard deviations worse on EF measures compared to control groups. This effect size was in the medium range, compared to the medium to large 0.62 average weighted mean effect size produced by Morgan and Lilienfeld. There was significant variation in calculated effect sizes across studies, indicating that the overall mean effect size was not representative of the association between ASB and EF. Effect size magnitude varied according to ASB groups and measures of EF. Cognitive impairments in ASB were not specific to EF.
Other methodological issues in the research literature and implications of the meta-analysis results are discussed and directions for future research are proposed.", "title": "" }, { "docid": "fcf46a98f9e77c83e4946bc75fb97849", "text": "Recent work on sequence to sequence translation using Recurrent Neural Networks (RNNs) based on Long Short Term Memory (LSTM) architectures has shown great potential for learning useful representations of sequential data. A oneto-many encoder-decoder(s) scheme allows for a single encoder to provide representations serving multiple purposes. In our case, we present an LSTM encoder network able to produce representations used by two decoders: one that reconstructs, and one that classifies if the training sequence has an associated label. This allows the network to learn representations that are useful for both discriminative and reconstructive tasks at the same time. This paradigm is well suited for semi-supervised learning with sequences and we test our proposed approach on an action recognition task using motion capture (MOCAP) sequences. We find that semi-supervised feature learning can improve state-of-the-art movement classification accuracy on the HDM05 action dataset. Further, we find that even when using only labeled data and a primarily discriminative objective the addition of a reconstructive decoder can serve as a form of regularization that reduces over-fitting and improves test set accuracy.", "title": "" }, { "docid": "a458f7a0aabee005db091e6b527032b9", "text": "Formal verification has seen much success in several domains of hardware and software design. For example, in hardware verification there has been much work in the verification of microprocessors (e.g. [1]) and memory systems (e.g. [2]). Similarly, software verification has seen success in device-drivers (e.g. [3]) and concurrent software (e.g. [4]). The area of network verification, which consists of both hardware and software components, has received relatively less attention. Traditionally, the focus in this domain has been on performance and security, with less emphasis on functional correctness. However, increasing complexity is resulting in increasing functional failures and thus prompting interest in verification of key correctness properties. This paper reviews the formal verification techniques that have been used here thus far, with the goal of understanding the characteristics of the problem domain that are helpful for each of the techniques, as well as those that pose specific challenges. Finally, it highlights some interesting research challenges that need to be addressed in this important emerging domain.", "title": "" }, { "docid": "55ea00ff6c707aed1342938784ac00f8", "text": "The i.Drive Lab has developed inter-disciplinary methodology for the analysis and modelling of behavioral and physiological responses related to the interaction between driver, vehicle, infrastructure, and virtual environment. The present research outlines the development of a validation study for the combination of virtual and real-life research methodologies. i.Drive driving simulator was set up to replicate the data acquisition of environmental and physiological information coming from an equipped i.Drive electric vehicle with same sensors. i.Drive tests are focused on the identification of driver's affective states that are able to define recurring situations and psychophysical conditions that are relevant for road-safety and drivers' comfort. 
Results show that it is possible to combine different research paradigms to collect low-level vehicle control behavior and higher-level cognitive measures, in order to develop data collection and elaboration for future mobility challenges.", "title": "" }, { "docid": "6c7bf63f9394bf5432f67b5e554743ae", "text": "A team from APL has been using model-based systems engineering (MBSE) methods within a conceptual modeling process to support and unify activities related to system-of-systems architecture development; modeling, simulation, and analysis efforts; and system capability trade studies. These techniques have been applied to support analysis of complex systems, particularly in the net-centric operations and warfare domain, which has proven particularly challenging to the modeling, simulation, and analysis community because of its complexity, information richness, and broad scope. In particular, the APL team has used MBSE techniques to provide structured models of complex systems incorporating input from multiple diverse stakeholders. Model-based systems engineering techniques facilitate complex system design and documentation processes. A rigorous, iterative conceptual development process based on the Unified Modeling Language (UML) or the Systems Modeling Language (SysML) and consisting of domain modeling, use case development, and behavioral and structural modeling supports design, architecting, analysis, modeling and simulation, test and evaluation, and program management activities. The resulting model is more useful than traditional documentation because it represents structure, data, and functions, along with associated documentation, in a multidimensional, navigable format. Beyond benefits to project documentation and stakeholder communication, UML- and SysML-based models also support direct analysis methods, such as functional thread extraction. The APL team is continuing to develop analysis techniques using conceptual models to reduce the risk of design and test errors, reduce costs, and improve the quality of analysis and supporting modeling and simulation activities in the development of complex systems.", "title": "" }, { "docid": "b6da971f13c1075ce1b4aca303e7393f", "text": "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.", "title": "" }, { "docid": "b41c0a4e2a312d74d9a244e01fc76d66", "text": "There is a growing interest in studying the adoption of m-payments but literature on the subject is still in its infancy and no empirical research relating to this has been conducted in the context of the UK to date.
The aim of this study is to unveil the current situation in m-payment adoption research and provide future research direction through the development of a research model for the examination of factors affecting m-payment adoption in the UK context. Following an extensive search of the literature, this study finds that 186 relationships between independent and dependent variables have been analysed by 32 existing empirical m-payment and m-banking adoption studies. From analysis of these relationships the most significant factors found to influence adoption are uncovered and an extension of UTAUT2 with the addition of perceived risk and trust is proposed to increase the applicability of UTAUT2 to the m-payment context.", "title": "" }, { "docid": "1c90adf8ec68ff52e777b2041f8bf4c4", "text": "In many situations we have some measurement of confidence on “positiveness” for a binary label. The “positiveness” is a continuous value whose range is a bounded interval. It quantifies the affiliation of each training data to the positive class. We propose a novel learning algorithm called expectation loss SVM (eSVM) that is devoted to the problems where only the “positiveness” instead of a binary label of each training sample is available. Our e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision where the exact positiveness value of each training example is unobserved. In experiments, we show that the e-SVM algorithm can effectively address the segment proposal classification task under both strong supervision (e.g. the pixel-level annotations are available) and the weak supervision (e.g. only bounding-box annotations are available), and outperforms the alternative approaches. Besides, we further validate this method on two major tasks of computer vision: semantic segmentation and object detection. Our method achieves the state-of-the-art object detection performance on PASCAL VOC 2007 dataset.", "title": "" }, { "docid": "c1aa687c4a48cfbe037fe87ed4062dab", "text": "This paper deals with the modelling and control of a single sided linear switched reluctance actuator. This study provide a presentation of modelling and proposes a study on open and closed loop controls for the studied motor. From the proposed model, its dynamic behavior is described and discussed in detail. In addition, a simpler controller based on PID regulator is employed to upgrade the dynamic behavior of the motor. The simulation results in closed loop show a significant improvement in dynamic response compared with open loop. In fact, this simple type of controller offers the possibility to improve the dynamic response for sliding door application.", "title": "" }, { "docid": "ef785a3eadaa01a7b45d978f63583513", "text": "This paper presents a laparoscopic grasping tool for minimally invasive surgery with the capability of multiaxis force sensing. The tool is able to sense three-axis Cartesian manipulation force and a single-axis grasping force. The forces are measured by a wrist force sensor located at the distal end of the tool, and two torque sensors at the tool base, respectively. We propose an innovative design of a miniature force sensor achieving structural simplicity and potential cost effectiveness. 
A prototype is manufactured and experiments are conducted in a simulated surgical environment by using an open platform for surgical robot research, called Raven-II.", "title": "" }, { "docid": "d569902303b93274baf89527e666adc0", "text": "We present a novel sparse representation based approach for the restoration of clipped audio signals. In the proposed approach, the clipped signal is decomposed into overlapping frames and the declipping problem is formulated as an inverse problem, per audio frame. This problem is further solved by a constrained matching pursuit algorithm, that exploits the sign pattern of the clipped samples and their maximal absolute value. Performance evaluation with a collection of music and speech signals demonstrate superior results compared to existing algorithms, over a wide range of clipping levels.", "title": "" }, { "docid": "23e6d97e1b7b224daf72efc254939d0c", "text": "In this study, the effects of ploidy level and culture medium were studied on the production of tropane alkaloids. We have successfully produced stable tetraploid hairy root lines of Hyoscyamus muticus and their ploidy stability was confirmed 30 months after transformation. Tetraploidy affected the growth rate and alkaloid accumulation in plants and transformed root cultures of Egyptian henbane. Although tetraploid plants could produce 200% higher scopolamine than their diploid counterparts, this result was not observed for corresponding induced hairy root cultures. Culture conditions did not only play an important role for biomass production, but also significantly affected tropane alkaloid accumulation in hairy root cultures. In spite of its lower biomass production, tetraploid clone could produce more scopolamine than the diploid counterpart under similar growth conditions. The highest yields of scopolamine (13.87 mg l−1) and hyoscyamine (107.7 mg 1−1) were obtained when diploid clones were grown on medium consisting of either Murashige and Skoog with 60 g/l sucrose or Gamborg’s B5 with 40 g/l sucrose, respectively. Although the hyoscyamine is the main alkaloid in the H. muticus plants, manipulation of ploidy level and culture conditions successfully changed the scopolamine/hyoscyamine ratio towards scopolamine. The fact that hyoscyamine is converted to scopolamine is very important due to the higher market value of scopolamine.", "title": "" }, { "docid": "d9b7636d566d82f9714272f1c9f83f2f", "text": "OBJECTIVE\nFew studies have investigated the association between religion and suicide either in terms of Durkheim's social integration hypothesis or the hypothesis of the regulative benefits of religion. The relationship between religion and suicide attempts has received even less attention.\n\n\nMETHOD\nDepressed inpatients (N=371) who reported belonging to one specific religion or described themselves as having no religious affiliation were compared in terms of their demographic and clinical characteristics.\n\n\nRESULTS\nReligiously unaffiliated subjects had significantly more lifetime suicide attempts and more first-degree relatives who committed suicide than subjects who endorsed a religious affiliation. Unaffiliated subjects were younger, less often married, less often had children, and had less contact with family members. Furthermore, subjects with no religious affiliation perceived fewer reasons for living, particularly fewer moral objections to suicide. 
In terms of clinical characteristics, religiously unaffiliated subjects had more lifetime impulsivity, aggression, and past substance use disorder. No differences in the level of subjective and objective depression, hopelessness, or stressful life events were found.\n\n\nCONCLUSIONS\nReligious affiliation is associated with less suicidal behavior in depressed inpatients. After other factors were controlled, it was found that greater moral objections to suicide and lower aggression level in religiously affiliated subjects may function as protective factors against suicide attempts. Further study about the influence of religious affiliation on aggressive behavior and how moral objections can reduce the probability of acting on suicidal thoughts may offer new therapeutic strategies in suicide prevention.", "title": "" } ]
scidocsrr
0beab3e99259c697748456cbf8ea89ec
Depth Estimation from Image Structure
[ { "docid": "9bf157e016f4fc124128a3008dc1c47c", "text": "The appearance of an object is composed of local structure. This local structure can be described and characterized by a vector of local features measured by local operators such as Gaussian derivatives or Gabor filters. This article presents a technique where appearances of objects are represented by the joint statistics of such local neighborhood operators. As such, this represents a new class of appearance based techniques for computer vision. Based on joint statistics, the paper develops techniques for the identification of multiple objects at arbitrary positions and orientations in a cluttered scene. Experiments show that these techniques can identify over 100 objects in the presence of major occlusions. Most remarkably, the techniques have low complexity and therefore run in real-time.", "title": "" } ]
[ { "docid": "e6f506c3c90a15b5e4079ccb75eb3ff0", "text": "Stories of people's everyday experiences have long been the focus of psychology and sociology research, and are increasingly being used in innovative knowledge-based technologies. However, continued research in this area is hindered by the lack of standard corpora of sufficient size and by the costs of creating one from scratch. In this paper, we describe our efforts to develop a standard corpus for researchers in this area by identifying personal stories in the tens of millions of blog posts in the ICWSM 2009 Spinn3r Dataset. Our approach was to employ statistical text classification technology on the content of blog entries, which required the creation of a sufficiently large set of annotated training examples. We describe the development and evaluation of this classification technology and how it was applied to the dataset in order to identify nearly a million", "title": "" }, { "docid": "59b26acc158c728cf485eae27de665f7", "text": "The ability of the parasite Plasmodium falciparum to evade the immune system and be sequestered within human small blood vessels is responsible for severe forms of malaria. The sequestration depends on the interaction between human endothelial receptors and P. falciparum erythrocyte membrane protein 1 (PfEMP1) exposed on the surface of the infected erythrocytes (IEs). In this study, the transcriptomes of parasite populations enriched for parasites that bind to human P-selectin, E-selectin, CD9 and CD151 receptors were analysed. IT4_var02 and IT4_var07 were specifically expressed in IT4 parasite populations enriched for P-selectin-binding parasites; eight var genes (IT4_var02/07/09/13/17/41/44/64) were specifically expressed in isolate populations enriched for CD9-binding parasites. Interestingly, IT4 parasite populations enriched for E-selectin- and CD151-binding parasites showed identical expression profiles to those of a parasite population exposed to wild-type CHO-745 cells. The same phenomenon was observed for the 3D7 isolate population enriched for binding to P-selectin, E-selectin, CD9 and CD151. This implies that the corresponding ligands for these receptors have either weak binding capacity or do not exist on the IE surface. Conclusively, this work expanded our understanding of P. falciparum adhesive interactions, through the identification of var transcripts that are enriched within the selected parasite populations.", "title": "" }, { "docid": "392f7b126431b202d57d6c25c07f7f7c", "text": "Serine racemase (SRace) is an enzyme that catalyzes the conversion of L-serine to pyruvate or D-serine, an endogenous agonist for NMDA receptors. Our previous studies showed that inflammatory stimuli such as Abeta could elevate steady-state mRNA levels for SRace, perhaps leading to inappropriate glutamatergic stimulation under conditions of inflammation. We report here that a proinflammatory stimulus (lipopolysaccharide) elevated the activity of the human SRace promoter, as indicated by expression of a luciferase reporter system transfected into a microglial cell line. This effect corresponded to an elevation of SRace protein levels in microglia, as well. By contrast, dexamethasone inhibited the SRace promoter activity and led to an apparent suppression of SRace steady-state mRNA levels. A potential binding site for NFkappaB was explored, but this sequence played no significant role in SRace promoter activation. 
Instead, large deletions and site-directed mutagenesis indicated that a DNA element between -1382 and -1373 (relative to the start of translation) was responsible for the activation of the promoter by lipopolysaccharide. This region fits the consensus for an activator protein-1 binding site. Lipopolysaccharide induced an activity capable of binding this DNA element in electrophoretic mobility shift assays. Supershifts with antibodies against c-Fos and JunB identified these as the responsible proteins. An inhibitor of Jun N-terminal kinase blocked SRace promoter activation, further implicating activator protein-1. These data indicate that proinflammatory stimuli utilize a signal transduction pathway culminating in activator protein-1 activation to induce expression of serine racemase.", "title": "" }, { "docid": "333b21433d17a9d271868e203c8a9481", "text": "The aim of stock prediction is to effectively predict future stock market trends (or stock prices), which can lead to increased profit. One major stock analysis method is the use of candlestick charts. However, candlestick chart analysis has usually been based on the utilization of numerical formulas. There has been no work taking advantage of an image processing technique to directly analyze the visual content of the candlestick charts for stock prediction. Therefore, in this study we apply the concept of image retrieval to extract seven different wavelet-based texture features from candlestick charts. Then, similar historical candlestick charts are retrieved based on different texture features related to the query chart, and the “future” stock movements of the retrieved charts are used for stock prediction. To assess the applicability of this approach to stock prediction, two datasets are used, containing 5-year and 10-year training and testing sets, collected from the Dow Jones Industrial Average Index (INDU) for the period between 1990 and 2009. Moreover, two datasets (2010 and 2011) are used to further validate the proposed approach. The experimental results show that visual content extraction and similarity matching of candlestick charts is a new and useful analytical method for stock prediction. More specifically, we found that the extracted feature vectors of 30, 90, and 120, the number of textual features extracted from the candlestick charts in the BMP format, are more suitable for predicting stock movements, while the 90 feature vector offers the best performance for predicting short- and medium-term stock movements. That is, using the 90 feature vector provides the lowest MAPE (3.031%) and Theil’s U (1.988%) rates in the twenty-year dataset, and the best MAPE (2.625%, 2.945%) and Theil’s U (1.622%, 1.972%) rates in the two validation datasets (2010 and 2011).", "title": "" }, { "docid": "4cd36ace8473aeaa61ced34b548c6585", "text": "OBJECTIVE\nSmaller hippocampal volume has been reported only in some but not all studies of unipolar major depressive disorder. Severe stress early in life has also been associated with smaller hippocampal volume and with persistent changes in the hypothalamic-pituitary-adrenal axis. However, prior hippocampal morphometric studies in depressed patients have neither reported nor controlled for a history of early childhood trauma. 
In this study, the volumes of the hippocampus and of control brain regions were measured in depressed women with and without childhood abuse and in healthy nonabused comparison subjects.\n\n\nMETHOD\nStudy participants were 32 women with current unipolar major depressive disorder-21 with a history of prepubertal physical and/or sexual abuse and 11 without a history of prepubertal abuse-and 14 healthy nonabused female volunteers. The volumes of the whole hippocampus, temporal lobe, and whole brain were measured on coronal MRI scans by a single rater who was blind to the subjects' diagnoses.\n\n\nRESULTS\nThe depressed subjects with childhood abuse had an 18% smaller mean left hippocampal volume than the nonabused depressed subjects and a 15% smaller mean left hippocampal volume than the healthy subjects. Right hippocampal volume was similar across the three groups. The right and left hippocampal volumes in the depressed women without abuse were similar to those in the healthy subjects.\n\n\nCONCLUSIONS\nA smaller hippocampal volume in adult women with major depressive disorder was observed exclusively in those who had a history of severe and prolonged physical and/or sexual abuse in childhood. An unreported history of childhood abuse in depressed subjects could in part explain the inconsistencies in hippocampal volume findings in prior studies in major depressive disorder.", "title": "" }, { "docid": "e7646a79b25b2968c3c5b668d0216aa6", "text": "In this paper, an image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions. Low-level features describing the color, position, size and shape of the resulting regions are extracted and are automatically mapped to appropriate intermediatelevel descriptors forming a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) in a human-centered fashion. When querying, clearly irrelevant image regions are rejected using the intermediate-level descriptors; following that, a relevance feedback mechanism employing the low-level features is invoked to produce the final query results. The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image of the collection, and query-by-example approaches, which assume that the user queries for images similar to one that already is at his disposal.", "title": "" }, { "docid": "8999e010ddbc0aa7ef579d8a9e055769", "text": "Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications such as image classification, face detection, and video analysis, because of their ability to train and classify with high accuracy. Due to multiple convolution and fully-connected layers that are compute-/memory-intensive, it is difficult to perform real-time classification with low power consumption on today?s computing systems. FPGAs have been widely explored as hardware accelerators for CNNs because of their reconfigurability and energy efficiency, as well as fast turn-around-time, especially with high-level synthesis methodologies. 
Previous FPGA-based CNN accelerators, however, typically implemented generic accelerators agnostic to the CNN configuration, where the reconfigurable capabilities of FPGAs are not fully leveraged to maximize the overall system throughput. In this work, we present a systematic design space exploration methodology to maximize the throughput of an OpenCL-based FPGA accelerator for a given CNN model, considering the FPGA resource constraints such as on-chip memory, registers, computational resources and external memory bandwidth. The proposed methodology is demonstrated by optimizing two representative large-scale CNNs, AlexNet and VGG, on two Altera Stratix-V FPGA platforms, DE5-Net and P395-D8 boards, which have different hardware resources. We achieve a peak performance of 136.5 GOPS for convolution operation, and 117.8 GOPS for the entire VGG network that performs ImageNet classification on P395-D8 board.", "title": "" }, { "docid": "19c3c2ac5e35e8e523d796cef3717d90", "text": "The printing press long ago and the computer today have made widespread access to information possible. Learning theorists have suggested, however, that mere information is a poor way to learn. Instead, more effective learning comes through doing. While the most popularized element of today's MOOCs are the video lectures, many MOOCs also include interactive activities that can afford learning by doing. This paper explores the learning benefits of the use of informational assets (e.g., videos and text) in MOOCs, versus the learning by doing opportunities that interactive activities provide. We find that students doing more activities learn more than students watching more videos or reading more pages. We estimate the learning benefit from extra doing (1 SD increase) to be more than six times that of extra watching or reading. Our data, from a psychology MOOC, is correlational in character, however we employ causal inference mechanisms to lend support for the claim that the associations we find are causal.", "title": "" }, { "docid": "3eccedb5a9afc0f7bc8b64c3b5ff5434", "text": "The design of a high impedance, high Q tunable load is presented with operating frequency between 400MHz and close to 6GHz. The bandwidth is made independently tunable of the carrier frequency by using an active inductor resonator with multiple tunable capacitances. The Q factor can be tuned from a value 40 up to 300. The circuit is targeted at 5G wideband applications requiring narrow band filtering where both centre frequency and bandwidth needs to be tunable. The circuit impedance is applied to the output stage of a standard CMOS cascode and results show that high Q factors can be achieved close to 6GHz with 11dB rejection at 20MHz offset from the centre frequency. The circuit architecture takes advantage of currently available low cost, low area tunable capacitors based on micro-electromechanical systems (MEMS) and Barium Strontium Titanate (BST).", "title": "" }, { "docid": "144480a9154226cf4a72f149ff6c9c56", "text": "The availability of medical imaging data from clinical archives, research literature, and clinical manuals, coupled with recent advances in computer vision offer the opportunity for image-based diagnosis, teaching, and biomedical research. However, the content and semantics of an image can vary depending on its modality and as such the identification of image modality is an important preliminary step. 
The key challenge for automatically classifying the modality of a medical image is due to the visual characteristics of different modalities: some are visually distinct while others may have only subtle differences. This challenge is compounded by variations in the appearance of images based on the diseases depicted and a lack of sufficient training data for some modalities. In this paper, we introduce a new method for classifying medical images that uses an ensemble of different convolutional neural network (CNN) architectures. CNNs are a state-of-the-art image classification technique that learns the optimal image features for a given classification task. We hypothesise that different CNN architectures learn different levels of semantic image representation and thus an ensemble of CNNs will enable higher quality features to be extracted. Our method develops a new feature extractor by fine-tuning CNNs that have been initialized on a large dataset of natural images. The fine-tuning process leverages the generic image features from natural images that are fundamental for all images and optimizes them for the variety of medical imaging modalities. These features are used to train numerous multiclass classifiers whose posterior probabilities are fused to predict the modalities of unseen images. Our experiments on the ImageCLEF 2016 medical image public dataset (30 modalities; 6776 training images, and 4166 test images) show that our ensemble of fine-tuned CNNs achieves a higher accuracy than established CNNs. Our ensemble also achieves a higher accuracy than methods in the literature evaluated on the same benchmark dataset and is only overtaken by those methods that source additional training data.", "title": "" }, { "docid": "d17622889db09b8484d94392cadf1d78", "text": "Software development has always inherently required multitasking: developers switch between coding, reviewing, testing, designing, and meeting with colleagues. The advent of software ecosystems like GitHub has enabled something new: the ability to easily switch between projects. Developers also have social incentives to contribute to many projects; prolific contributors gain social recognition and (eventually) economic rewards. Multitasking, however, comes at a cognitive cost: frequent context-switches can lead to distraction, sub-standard work, and even greater stress. In this paper, we gather ecosystem-level data on a group of programmers working on a large collection of projects. We develop models and methods for measuring the rate and breadth of a developers' context-switching behavior, and we study how context-switching affects their productivity. We also survey developers to understand the reasons for and perceptions of multitasking. We find that the most common reason for multitasking is interrelationships and dependencies between projects. Notably, we find that the rate of switching and breadth (number of projects) of a developer's work matter. Developers who work on many projects have higher productivity if they focus on few projects per day. Developers that switch projects too much during the course of a day have lower productivity as they work on more projects overall. Despite these findings, developers perceptions of the benefits of multitasking are varied.", "title": "" }, { "docid": "46004ee1f126c8a5b76166c5dc081bc8", "text": "In this study, an energy harvesting chip was developed to scavenge energy from artificial light to charge a wireless sensor node. 
The chip core is a miniature transformer with a nano-ferrofluid magnetic core. The chip embedded transformer can convert harvested energy from its solar cell to variable voltage output for driving multiple loads. This chip system yields a simple, small, and more importantly, a battery-less power supply solution. The sensor node is equipped with multiple sensors that can be enabled by the energy harvesting power supply to collect information about the human body comfort degree. Compared with lab instruments, the nodes with temperature, humidity and photosensors driven by harvested energy had variation coefficient measurement precision of less than 6% deviation under low environmental light of 240 lux. The thermal comfort was affected by the air speed. A flow sensor equipped on the sensor node was used to detect airflow speed. Due to its high power consumption, this sensor node provided 15% less accuracy than the instruments, but it still can meet the requirement of analysis for predicted mean votes (PMV) measurement. The energy harvesting wireless sensor network (WSN) was deployed in a 24-hour convenience store to detect thermal comfort degree from the air conditioning control. During one year operation, the sensor network powered by the energy harvesting chip retained normal functions to collect the PMV index of the store. According to the one month statistics of communication status, the packet loss rate (PLR) is 2.3%, which is as good as the presented results of those WSNs powered by battery. Referring to the electric power records, almost 54% energy can be saved by the feedback control of an energy harvesting sensor network. These results illustrate that, scavenging energy not only creates a reliable power source for electronic devices, such as wireless sensor nodes, but can also be an energy source by building an energy efficient program.", "title": "" }, { "docid": "d8badd23313c7ea4baa0231ff1b44e32", "text": "Current state-of-the-art solutions for motion capture from a single camera are optimization driven: they optimize the parameters of a 3D human model so that its re-projection matches measurements in the video (e.g. person segmentation, optical flow, keypoint detections etc.). Optimization models are susceptible to local minima. This has been the bottleneck that forced using clean green-screen like backgrounds at capture time, manual initialization, or switching to multiple cameras as input resource. In this work, we propose a learning based motion capture model for single camera input. Instead of optimizing mesh and skeleton parameters directly, our model optimizes neural network weights that predict 3D shape and skeleton configurations given a monocular RGB video. Our model is trained using a combination of strong supervision from synthetic data, and self-supervision from differentiable rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c) human-background segmentation, in an end-to-end framework. Empirically we show our model combines the best of both worlds of supervised learning and test-time optimization: supervised learning initializes the model parameters in the right regime, ensuring good pose and surface initialization at test time, without manual effort. Self-supervision by back-propagating through differentiable rendering allows (unsupervised) adaptation of the model to the test data, and offers much tighter fit than a pretrained fixed model. 
We show that the proposed model improves with experience and converges to low-error solutions where previous optimization methods fail.", "title": "" }, { "docid": "53575c45a60f93c850206f2a467bc8e8", "text": "We present BPEmb, a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization. BPEmb is available at https://github.com/bheinzerling/bpemb.", "title": "" }, { "docid": "c3e371b0c13f431cbf9b9278a6d40ace", "text": "Until today, most lecturers in universities are found still using the conventional methods of taking students' attendance either by calling out the student names or by passing around an attendance sheet for students to sign confirming their presence. In addition to the time-consuming issue, such method is also at higher risk of having students cheating about their attendance, especially in a large classroom. Therefore a method of taking attendance by employing an application running on the Android platform is proposed in this paper. This application, once installed can be used to download the students list from a designated web server. Based on the downloaded list of students, the device will then act like a scanner to scan each of the student cards one by one to confirm and verify the student's presence. The device's camera will be used as a sensor that will read the barcode printed on the students' cards. The updated attendance list is then uploaded to an online database and can also be saved as a file to be transferred to a PC later on. This system will help to eliminate the current problems, while also promoting a paperless environment at the same time. Since this application can be deployed on lecturers' own existing Android devices, no additional hardware cost is required.", "title": "" }, { "docid": "2c3e6373feb4352a68ec6fd109df66e0", "text": "A broadband transition design between broadside coupled stripline (BCS) and conductor-backed coplanar waveguide (CBCPW) is proposed and studied. The E-field of CBCPW is designed to be gradually changed to that of BCS via a simple linear tapered structure. Two back-to-back transitions are simulated, fabricated and measured. It is reported that maximum insertion loss of 2.3 dB, return loss of higher than 10 dB and group delay flatness of about 0.14 ns are obtained from 50 MHz to 20 GHz.", "title": "" }, { "docid": "7c783834f6ad0151f944766a91f0a67d", "text": "Estradiol is the most potent and ubiquitous member of a class of steroid hormones called estrogens. Fetuses and newborns are exposed to estradiol derived from their mother, their own gonads, and synthesized locally in their brains. Receptors for estradiol are nuclear transcription factors that regulate gene expression but also have actions at the membrane, including activation of signal transduction pathways. The developing brain expresses high levels of receptors for estradiol. The actions of estradiol on developing brain are generally permanent and range from establishment of sex differences to pervasive trophic and neuroprotective effects. Cellular end points mediated by estradiol include the following: 1) apoptosis, with estradiol preventing it in some regions but promoting it in others; 2) synaptogenesis, again estradiol promotes in some regions and inhibits in others; and 3) morphometry of neurons and astrocytes. 
Estradiol also impacts cellular physiology by modulating calcium handling, immediate-early-gene expression, and kinase activity. The specific mechanisms of estradiol action permanently impacting the brain are regionally specific and often involve neuronal/glial cross-talk. The introduction of endocrine disrupting compounds into the environment that mimic or alter the actions of estradiol has generated considerable concern, and the developing brain is a particularly sensitive target. Prostaglandins, glutamate, GABA, granulin, and focal adhesion kinase are among the signaling molecules co-opted by estradiol to differentiate male from female brains, but much remains to be learned. Only by understanding completely the mechanisms and impact of estradiol action on the developing brain can we also understand when these processes go awry.", "title": "" }, { "docid": "2ae96a524ba3b6c43ea6bfa112f71a30", "text": "Accurate quantification of gluconeogenic flux following alcohol ingestion in overnight-fasted humans has yet to be reported. [2-13C1]glycerol, [U-13C6]glucose, [1-2H1]galactose, and acetaminophen were infused in normal men before and after the consumption of 48 g alcohol or a placebo to quantify gluconeogenesis, glycogenolysis, hepatic glucose production, and intrahepatic gluconeogenic precursor availability. Gluconeogenesis decreased 45% vs. the placebo (0.56 ± 0.05 to 0.44 ± 0.04 mg ⋅ kg-1 ⋅ min-1vs. 0.44 ± 0.05 to 0.63 ± 0.09 mg ⋅ kg-1 ⋅ min-1, respectively, P < 0.05) in the 5 h after alcohol ingestion, and total gluconeogenic flux was lower after alcohol compared with placebo. Glycogenolysis fell over time after both the alcohol and placebo cocktails, from 1.46-1.47 mg ⋅ kg-1 ⋅ min-1to 1.35 ± 0.17 mg ⋅ kg-1 ⋅ min-1(alcohol) and 1.26 ± 0.20 mg ⋅ kg-1 ⋅ min-1, respectively (placebo, P < 0.05 vs. baseline). Hepatic glucose output decreased 12% after alcohol consumption, from 2.03 ± 0.21 to 1.79 ± 0.21 mg ⋅ kg-1 ⋅ min-1( P < 0.05 vs. baseline), but did not change following the placebo. Estimated intrahepatic gluconeogenic precursor availability decreased 61% following alcohol consumption ( P < 0.05 vs. baseline) but was unchanged after the placebo ( P < 0.05 between treatments). We conclude from these results that gluconeogenesis is inhibited after alcohol consumption in overnight-fasted men, with a somewhat larger decrease in availability of gluconeogenic precursors but a smaller effect on glucose production and no effect on plasma glucose concentrations. Thus inhibition of flux into the gluconeogenic precursor pool is compensated by changes in glycogenolysis, the fate of triose-phosphates, and peripheral tissue utilization of plasma glucose.", "title": "" }, { "docid": "fd786ae1792e559352c75940d84600af", "text": "In this paper, we obtain an (1 − e−1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n) function value computations. c © 2003 Published by Elsevier B.V.", "title": "" }, { "docid": "fad4ff82e9b11f28a70749d04dfbf8ca", "text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. 
Enterprise architecture (EA) is the definition and representation of a high-level view of an enterprise's business processes and IT systems, their interrelationships, and the extent to which these processes and systems are shared by different parts of the enterprise. EA aims to define a suitable operating platform to support an organisation's future goals and the roadmap for moving towards this vision. Despite significant practitioner interest in the domain, understanding the value of EA remains a challenge. Although many studies make EA benefit claims, the explanations of why and how EA leads to these benefits are fragmented, incomplete, and not grounded in theory. This article aims to address this knowledge gap by focusing on the question: How does EA lead to organisational benefits? Through a careful review of EA literature, the paper consolidates the fragmented knowledge on EA benefits and presents the EA Benefits Model (EABM). The EABM proposes that EA leads to organisational benefits through its impact on four benefit enablers: Organisational Alignment, Information Availability, Resource Portfolio Optimisation, and Resource Complementarity. The article concludes with a discussion of a number of potential avenues for future research, which could build on the findings of this study.", "title": "" } ]
scidocsrr
46046b1d727162cdcd8c16be79005ed7
Leimu: Gloveless Music Interaction Using a Wrist Mounted Leap Motion
[ { "docid": "619e3893a731ffd0ed78c9dd386a1dff", "text": "The introduction of new gesture interfaces has been expanding the possibilities of creating new Digital Musical Instruments (DMIs). Leap Motion Controller was recently launched promising fine-grained hand sensor capabilities. This paper proposes a preliminary study and evaluation of this new sensor for building new DMIs. Here, we list a series of gestures, recognized by the device, which could be theoretically used for playing a large number of musical instruments. Then, we present an analysis of precision and latency of these gestures as well as a first case study integrating Leap Motion with a virtual music keyboard.", "title": "" } ]
[ { "docid": "23e5520226bc76f67d0a1e9ef98a4bb2", "text": "This report analyzes the modelling of default intensities and probabilities in single-firm reduced-form models, and reviews the three main approaches to incorporating default dependencies within the framework of reduced models. The first approach, the conditionally independent defaults (CID), introduces credit risk dependence between firms through the dependence of the firms’ intensity processes on a common set of state variables. Contagion models extend the CID approach to account for the empirical observation of default clustering. There exist periods in which the firms’ credit risk is increased and in which the majority of the defaults take place. Finally, default dependencies can also be accounted for using copula functions. The copula approach takes as given the marginal default probabilities of the different firms and plugs them into a copula function, which provides the model with the default dependence structure. After a description of copulas, we present two different approaches of using copula functions in intensity models, and discuss the issues of the choice and calibration of the copula function. ∗This report is a revised version of the Master’s Thesis presented in partial fulfillment of the 2002-2003 MSc in Financial Mathematics at King’s College London. I thank my supervisor Lane P. Hughston and everyone at the Financial Mathematics Group at King’s College, particularly Giulia Iori and Mihail Zervos. Financial support by Banco de España is gratefully acknowledged. Any errors are the exclusive responsibility of the author. CEMFI, Casado del Alisal 5, 28014 Madrid, Spain. Email: elizalde@cemfi.es.", "title": "" }, { "docid": "32287cfcf9978e04bea4ab5f01a6f5da", "text": "OBJECTIVE\nThe purpose of this study was to examine the relationship of performance on the Developmental Test of Visual-Motor Integration (VMI; Beery, 1997) to handwriting legibility in children attending kindergarten. The relationship of using lined versus unlined paper on letter legibility, based on a modified version of the Scale of Children's Readiness in PrinTing (Modified SCRIPT; Weil & Cunningham Amundson, 1994) was also investigated.\n\n\nMETHOD\nFifty-four typically developing kindergarten students were administered the VMI; 30 students completed the Modified SCRIPT with unlined paper, 24 students completed the Modified SCRIPT with lined paper. Students were assessed in the first quarter of the kindergarten school year and scores were analyzed using correlational and nonparametric statistical measures.\n\n\nRESULTS\nStrong positive relationships were found between VMI assessment scores and student's ability to legibly copy letterforms. Students who could copy the first nine forms on the VMI performed significantly better than students who could not correctly copy the first nine VMI forms on both versions of the Modified SCRIPT.\n\n\nCONCLUSION\nVisual-motor integration skills were shown to be related to the ability to copy letters legibly. These findings support the research of Weil and Cunningham Amundson. Findings from this study also support the conclusion that there is no significant difference in letter writing legibility between students who use paper with or without lines.", "title": "" }, { "docid": "fbf57d773bcdd8096e77246b3f785a96", "text": "The explosion of online content has made the management of such content non-trivial. Web-related tasks such as web page categorization, news filtering, query categorization, tag recommendation, etc. 
often involve the construction of multi-label categorization systems on a large scale. Existing multi-label classification methods either do not scale or have unsatisfactory performance. In this work, we propose MetaLabeler to automatically determine the relevant set of labels for each instance without intensive human involvement or expensive cross-validation. Extensive experiments conducted on benchmark data show that the MetaLabeler tends to outperform existing methods. Moreover, MetaLabeler scales to millions of multi-labeled instances and can be deployed easily. This enables us to apply the MetaLabeler to a large scale query categorization problem in Yahoo!, yielding a significant improvement in performance.", "title": "" }, { "docid": "e2880e705775f865486ad6f60dfbebb4", "text": "The relationship between persistent pain and self-directed, non-reactive awareness of present-moment experience (i.e., mindfulness) was explored in one of the dominant psychological theories of chronic pain - the fear-avoidance model[53]. A heterogeneous sample of 104 chronic pain outpatients at a multidisciplinary pain clinic in Australia completed psychometrically sound self-report measures of major variables in this model: Pain intensity, negative affect, pain catastrophizing, pain-related fear, pain hypervigilance, and functional disability. Two measures of mindfulness were also used, the Mindful Attention Awareness Scale [4] and the Five-Factor Mindfulness Questionnaire [1]. Results showed that mindfulness significantly negatively predicts each of these variables, accounting for 17-41% of their variance. Hierarchical multiple regression analysis showed that mindfulness uniquely predicts pain catastrophizing when other variables are controlled, and moderates the relationship between pain intensity and pain catastrophizing. This is the first clear evidence substantiating the strong link between mindfulness and pain catastrophizing, and suggests mindfulness might be added to the fear-avoidance model. Implications for the clinical use of mindfulness in screening and intervention are discussed.", "title": "" }, { "docid": "f5d8c506c9f25bff429cea1ed4c84089", "text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person»s lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.", "title": "" }, { "docid": "144c11393bef345c67595661b5b20772", "text": "BACKGROUND\nAppropriate placement of the bispectral index (BIS)-vista montage for frontal approach neurosurgical procedures is a neuromonitoring challenge. The standard bifrontal application interferes with the operative field; yet to date, no other placements have demonstrated good agreement. The purpose of our study was to compare the standard BIS montage with an alternate BIS montage across the nasal dorsum for neuromonitoring.\n\n\nMATERIALS AND METHODS\nThe authors performed a prospective study, enrolling patients and performing neuromonitoring using both the standard and the alternative montage on each patient. Data from the 2 placements were compared and analyzed using a Bland-Altman analysis, a Scatter plot analysis, and a matched-pair analysis.\n\n\nRESULTS\nOverall, 2567 minutes of data from each montage was collected on 28 subjects. 
Comparing the overall difference in score, the alternate BIS montage score was, on average, 2.0 (6.2) greater than the standard BIS montage score (P<0.0001). The Bland-Altman analysis revealed a difference in score of -2.0 (95% confidence interval, -14.1, 10.1), with 108/2567 (4.2%) of the values lying outside of the limit of agreement. The scatter plot analysis overall produced a trend line with the equation y=0.94x+0.82, with an R coefficient of 0.82.\n\n\nCONCLUSIONS\nWe determined that the nasal montage produces values that have slightly more variability compared with that ideally desired, but the variability is not clinically significant. In cases where the standard BIS-vista montage would interfere with the operative field, an alternative positioning of the BIS montage across the nasal bridge and under the eye can be used.", "title": "" }, { "docid": "980dc3d4b01caac3bf56df039d5ca513", "text": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named \"few-example object detection\". The key challenge consists in generating trustworthy training samples as many as possible from the pool. Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.", "title": "" }, { "docid": "5faef1f7afae4ccb3a701a11f60ac80b", "text": "State of the art deep learning models have made steady progress in the fields of computer vision and natural language processing, at the expense of growing model sizes and computational complexity. Deploying these models on low power and mobile devices poses a challenge due to their limited compute capabilities and strict energy budgets. One solution that has generated significant research interest is deploying highly quantized models that operate on low precision inputs and weights less than eight bits, trading off accuracy for performance. These models have a significantly reduced memory footprint (up to 32x reduction) and can replace multiply-accumulates with bitwise operations during compute intensive convolution and fully connected layers. Most deep learning frameworks rely on highly engineered linear algebra libraries such as ATLAS or Intel’s MKL to implement efficient deep learning operators. To date, none of the popular deep learning directly support low precision operators, partly due to a lack of optimized low precision libraries. In this paper we introduce a work flow to quickly generate high performance low precision deep learning operators for arbitrary precision that target multiple CPU architectures and include optimizations such as memory tiling and vectorization. 
We present an extensive case study on low power ARM Cortex-A53 CPU, and show how we can generate 1-bit, 2-bit convolutions with speedups up to 16x over an optimized 16-bit integer baseline and 2.3x better than handwritten implementations.", "title": "" }, { "docid": "754dc26aa595c2c759a34540af369eac", "text": "In recent years, the increasing popularity of outsourcing data to third-party cloud servers sparked a major concern towards data breaches. A standard measure to thwart this problem and to ensure data confidentiality is data encryption. Nevertheless, organizations that use traditional encryption techniques face the challenge of how to enable untrusted cloud servers perform search operations while the actually outsourced data remains confidential. Searchable encryption is a powerful tool that attempts to solve the challenge of querying data outsourced at untrusted servers while preserving data confidentiality. Whereas the literature mainly considers searching over an unstructured collection of files, this paper explores methods to execute SQL queries over encrypted databases. We provide a complete framework that supports private search queries over encrypted SQL databases, in particular for PostgreSQL and MySQL databases. We extend the solution for searchable encryption designed by Curtmola et al., to the case of SQL databases. We also provide features for evaluating range and boolean queries. We finally propose a framework for implementing our construction, validating its", "title": "" }, { "docid": "c57a8e7e15d6b216e451c77fafce271a", "text": "We study rank aggregation algorithms that take as input the opinions of players over their peers, represented as rankings, and output a social ordering of the players (which reflects, e.g., relative contribution to a project or fit for a job). To prevent strategic behavior, these algorithms must be impartial, i.e., players should not be able to influence their own position in the output ranking. We design several randomized algorithms that are impartial and closely emulate given (nonimpartial) rank aggregation rules in a rigorous sense. Experimental results further support the efficacy and practicability of our algorithms.", "title": "" }, { "docid": "26cd7a502fcbf2455b58365299dc8432", "text": "Derivative traders are usually required to scan through hundreds, even thousands of possible trades on a daily-basis; a concrete case is the so-called Mid-Curve Calendar Spread (MCCS). The actual procedure in place is full of pitfalls and a more systematic approach where more information at hand is crossed and aggregated to find good trading picks can be highly useful and undoubtedly increase the trader’s productivity. Therefore, in this work we propose an MCCS Recommendation System based on a stacking approach through Neural Networks. In order to suggest that such approach is methodologically and computationally feasible, we used a list of 15 different types of US Dollar MCCSs regarding expiration, forward and swap tenure. For each MCCS, we used 10 years of historical data ranging weekly from Sep/06 to Sep/16. Then, we started the modelling stage by: (i) fitting the base learners using as the input sensitivity metrics linked with the MCCS at time t, and its subsequent annualized returns as the output; (ii) feeding the prediction from each base model to a particular stacker; and (iii) making predictions and comparing different modelling methodologies by a set of performance metrics and benchmarks. 
After establishing a backtesting engine and setting performance metrics, our results suggest that our proposed Neural Network stacker compared favourably to other combination procedures.", "title": "" }, { "docid": "354500ae7e1ad1c6fd09438b26e70cb0", "text": "Dietary exposures can have consequences for health years or decades later and this raises questions about the mechanisms through which such exposures are 'remembered' and how they result in altered disease risk. There is growing evidence that epigenetic mechanisms may mediate the effects of nutrition and may be causal for the development of common complex (or chronic) diseases. Epigenetics encompasses changes to marks on the genome (and associated cellular machinery) that are copied from one cell generation to the next, which may alter gene expression, but which do not involve changes in the primary DNA sequence. These include three distinct, but closely inter-acting, mechanisms including DNA methylation, histone modifications and non-coding microRNAs (miRNA) which, together, are responsible for regulating gene expression not only during cellular differentiation in embryonic and foetal development but also throughout the life-course. This review summarizes the growing evidence that numerous dietary factors, including micronutrients and non-nutrient dietary components such as genistein and polyphenols, can modify epigenetic marks. In some cases, for example, effects of altered dietary supply of methyl donors on DNA methylation, there are plausible explanations for the observed epigenetic changes, but to a large extent, the mechanisms responsible for diet-epigenome-health relationships remain to be discovered. In addition, relatively little is known about which epigenomic marks are most labile in response to dietary exposures. Given the plasticity of epigenetic marks and their responsiveness to dietary factors, there is potential for the development of epigenetic marks as biomarkers of health for use in intervention studies.", "title": "" }, { "docid": "cac3d6893f1d311e0014b1afa22d903b", "text": "Canny algorithm can be used in extracting the object’s contour clearly by setting the appropriate parameters. The Otsu algorithm can calculate the high threshold value which is significant to the Canny algorithm, and then this threshold value can be used in the Canny algorithm to detect the object’s edge. From the exprimental result, the Otsu algorithm can be applied in choosing the threshold value which can be used in Canny algorithm, and this method improves the effect of extracting the edge of the Canny algorithm, and achieves the expect result finally.", "title": "" }, { "docid": "6c149f1f6e9dc859bf823679df175afb", "text": "Neurofeedback is attracting renewed interest as a method to self-regulate one's own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. 
Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.", "title": "" }, { "docid": "9c16bf2fb7ceba2bf872ca3d1475c6d9", "text": "Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated to video-level representations by computing statistics on these features. Typically zero-th (max) or the first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics. Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than their first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes that when combined with hand-crafted features (as is standard practice) achieves state-of-the-art accuracy.", "title": "" }, { "docid": "0cbc2eb794f44b178a54d97aeff69c19", "text": "Automatic identification of predatory conversations in chat logs helps the law enforcement agencies act proactively through early detection of predatory acts in cyberspace. In this paper, we describe the novel application of a deep learning method to the automatic identification of predatory chat conversations in large volumes of chat logs. We present a classifier based on Convolutional Neural Network (CNN) to address this problem domain. The proposed CNN architecture outperforms other classification techniques that are common in this domain including Support Vector Machine (SVM) and regular Neural Network (NN) in terms of classification performance, which is measured by F1-score. In addition, our experiments show that using existing pre-trained word vectors are not suitable for this specific domain. Furthermore, since the learning algorithm runs in a massively parallel environment (i.e., general-purpose GPU), the approach can benefit a large number of computation units (neurons) compared to when CPU is used. To the best of our knowledge, this is the first time that CNNs are adapted and applied to this application domain.", "title": "" }, { "docid": "0f66b62ddfd89237bb62fb6b60a7551a", "text": "BACKGROUND\nClinicians' expanding use of cosmetic restorative procedures has generated greater interest in the determination of esthetic guidelines and standards. The overall esthetic impact of a smile can be divided into four specific areas: gingival esthetics, facial esthetics, microesthetics and macroesthetics.
In this article, the authors focus on the principles of macroesthetics, which represents the relationships and ratios of relating multiple teeth to each other, to soft tissue and to facial characteristics.\n\n\nCASE DESCRIPTION\nThe authors categorize macroesthetic criteria based on two reference points: the facial midline and the amount and position of tooth reveal. The facial midline is a critical reference position for determining multiple design criteria. The amount and position of tooth reveal in various views and lip configurations also provide valuable guidelines in determining esthetic tooth positions and relationships.\n\n\nCLINICAL IMPLICATIONS\nEsthetics is an inherently subjective discipline. By understanding and applying simple esthetic rules, tools and strategies, dentists have a basis for evaluating natural dentitions and the results of cosmetic restorative procedures. Macroesthetic components of teeth and their relationship to each other can be influenced to produce more natural and esthetically pleasing restorative care.", "title": "" }, { "docid": "624d645054e730855eed9001e4c4bbc4", "text": "In this paper, we argue that some tasks (e.g., meeting support) require more flexible hypermedia systems and we describe a prototype hypermedia system, DOLPHIN, that implements more flexibility. As part of the argument, we present a theoretical design space for information structuring systems and locate existing hypertext systems within it. The dimensions of the space highlight a system's internal representation of structure and the user's actions in creating structure. Second, we describe an empirically derived range of activities connected to conducting group meetings, including the pre- and post-preparation phases, and argue that hyptetext systems need to be more flexible in order to support this range of activities. Finally, we describe a hypermedia prototype, DOLPHIN, which implements this kind of flexible support for meetings. DOLPHIN supports different degrees of formality (e.g., handwriting and sketches as well as typed nodes and links are supported), coexistence of different structures (e.g., handwriting and nodes can exist on the same page) and mutual transformations between them (e.g., handwriting can be turned into nodes and vice versa).", "title": "" }, { "docid": "745562de56499ff0030f35afa8d84b7f", "text": "This paper will show how the accuracy and security of SCADA systems can be improved by using anomaly detection to identify bad values caused by attacks and faults. The performance of invariant induction and ngram anomaly-detectors will be compared and this paper will also outline plans for taking this work further by integrating the output from several anomalydetecting techniques using Bayesian networks. Although the methods outlined in this paper are illustrated using the data from an electricity network, this research springs from a more general attempt to improve the security and dependability of SCADA systems using anomaly detection.", "title": "" }, { "docid": "c24156b6c9b8f5c04fe40e1c6814d115", "text": "This paper presents a compact SIW (substrate integrated waveguide) 3×3 Butler matrix (BM) for 5G mobile applications. The detailed structuring procedures, parameter determinations of each involved component are provided. To validate the 3×3 BM, a slot array is designed. The cascading simulations and prototype measurements are also carried out. The overall performance and dimension show that it can be used for 5G mobile devices. 
The measured S-parameters agree well with the simulated ones. The measured gains are in the range of 8.1 dBi ∼ 11.1 dBi, 7.1 dBi ∼ 9.8 dBi and 8.9 dBi ∼ 11 dBi for port 1∼3 excitations.", "title": "" } ]
scidocsrr
3ad0cd7dc7167ddcfee192b8d413736b
Geometric Loss Functions for Camera Pose Regression with Deep Learning
[ { "docid": "5c62f66d948f15cea55c1d2c9d10f229", "text": "This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.", "title": "" }, { "docid": "acefbbb42607f2d478a16448644bd6e6", "text": "The time complexity of incremental structure from motion (SfM) is often known as O(n^4) with respect to the number of cameras. As bundle adjustment (BA) being significantly improved recently by preconditioned conjugate gradient (PCG), it is worth revisiting how fast incremental SfM is. We introduce a novel BA strategy that provides good balance between speed and accuracy. Through algorithm analysis and extensive experiments, we show that incremental SfM requires only O(n) time on many major steps including BA. Our method maintains high accuracy by regularly re-triangulating the feature matches that initially fail to triangulate. We test our algorithm on large photo collections and long video sequences with various settings, and show that our method offers state of the art performance for large-scale reconstructions. The presented algorithm is available as part of VisualSFM at http://homes.cs.washington.edu/~ccwu/vsfm/.", "title": "" } ]
[ { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "e82e4599a7734c9b0292a32f551dd411", "text": "Generating a text abstract from a set of documents remains a challenging task. The neural encoder-decoder framework has recently been exploited to summarize single documents, but its success can in part be attributed to the availability of large parallel data automatically acquired from the Web. In contrast, parallel data for multi-document summarization are scarce and costly to obtain. There is a pressing need to adapt an encoder-decoder model trained on single-document summarization data to work with multiple-document input. In this paper, we present an initial investigation into a novel adaptation method. It exploits the maximal marginal relevance method to select representative sentences from multi-document input, and leverages an abstractive encoder-decoder model to fuse disparate sentences to an abstractive summary. The adaptation method is robust and itself requires no training data. Our system compares favorably to state-of-the-art extractive and abstractive approaches judged by automatic metrics and human assessors.", "title": "" }, { "docid": "2ddc4919771402dabedd2020649d1938", "text": "Increase in energy demand has made the renewable resources more attractive. Additionally, use of renewable energy sources reduces combustion of fossil fuels and the consequent CO2 emission which is the principal cause of global warming. 
The concept of photovoltaic-Wind hybrid system is well known and currently thousands of PV-Wind based power systems are being deployed worldwide, for providing power to small, remote, grid-independent applications. This paper shows the way to design the aspects of a hybrid power system that will target remote users. It emphasizes the renewable hybrid power system to obtain a reliable autonomous system with the optimization of the components size and the improvement of the cost. The system can provide electricity for a remote located village. The main power of the hybrid system comes from the photovoltaic panels and wind generators, while the batteries are used as backup units. The optimization software used for this paper is HOMER. HOMER is a design model that determines the optimal architecture and control strategy of the hybrid system. The simulation results indicate that the proposed hybrid system would be a feasible solution for distributed generation of electric power for stand-alone applications at remote locations", "title": "" }, { "docid": "a4c17b823d325ed5f339f78cd4d1e9ab", "text": "A 34–40 GHz VCO fabricated in 65 nm digital CMOS technology is demonstrated in this paper. The VCO uses a combination of switched capacitors and varactors for tuning and has a maximum Kvco of 240 MHz/V. It exhibits a phase noise of better than −98 dBc/Hz @ 1-MHz offset across the band while consuming 12 mA from a 1.2-V supply, an FOMT of −182.1 dBc/Hz. A cascode buffer following the VCO consumes 11 mA to deliver 0 dBm LO signal to a 50Ω load.", "title": "" }, { "docid": "308effb16ccec5e315da4d02119080d0", "text": "In this paper, we describe a method to photogrammetrically estimate the intrinsic and extrinsic parameters of fish-eye cameras using the properties of equidistance perspective, particularly vanishing point estimation, with the aim of providing a rectified image for scene viewing applications. The estimated intrinsic parameters are the optical center and the fish-eye lensing parameter, and the extrinsic parameters are the rotations about the world axes relative to the checkerboard calibration diagram.", "title": "" }, { "docid": "a38eef36ae38baf83c55262fbdd26278", "text": "An electrochemical sensor based on the electrocatalytic activity of functionalized graphene for sensitive detection of paracetamol is presented. The electrochemical behaviors of paracetamol on graphene-modified glassy carbon electrodes (GCEs) were investigated by cyclic voltammetry and square-wave voltammetry. The results showed that the graphene-modified electrode exhibited excellent electrocatalytic activity to paracetamol. A quasi-reversible redox process of paracetamol at the modified electrode was obtained, and the over-potential of paracetamol decreased significantly compared with that at the bare GCE. Such electrocatalytic behavior of graphene is attributed to its unique physical and chemical properties, e.g., subtle electronic characteristics, attractive pi-pi interaction, and strong adsorptive capability. This electrochemical sensor shows an excellent performance for detecting paracetamol with a detection limit of 3.2x10(-8)M, a reproducibility of 5.2% relative standard deviation, and a satisfied recovery from 96.4% to 103.3%. 
The sensor shows great promise for simple, sensitive, and quantitative detection and screening of paracetamol.", "title": "" }, { "docid": "1fb13cda340d685289f1863bb2bfd62b", "text": "1 Assistant Professor, Department of Prosthodontics, Ibn-e-Siena Hospital and Research Institute, Multan Medical and Dental College, Multan, Pakistan 2 Assistant Professor, Department of Prosthodontics, College of Dentistry, King Saud University, Riyadh, Saudi Arabia 3 Head Department of Prosthodontics, Armed Forces Institute of Dentistry, Rawalpindi, Pakistan For Correspondence: Dr Salman Ahmad, House No 10, Street No 2, Gulshan Sakhi Sultan Colony, Surej Miani Road, Multan, Pakistan. Email: drsalman21@gmail.com. Cell: 0300–8732017 INTRODUCTION", "title": "" }, { "docid": "332bcd9b49f3551d8f07e4f21a881804", "text": "Attention plays a critical role in effective learning. By means of attention assessment, it helps learners improve and review their learning processes, and even discover Attention Deficit Hyperactivity Disorder (ADHD). Hence, this work employs modified smart glasses which have an inward facing camera for eye tracking, and an inertial measurement unit for head pose estimation. The proposed attention estimation system consists of eye movement detection, head pose estimation, and machine learning. In eye movement detection, the central point of the iris is found by the locally maximum curve via the Hough transform where the region of interest is derived by the identified left and right eye corners. The head pose estimation is based on the captured inertial data to generate physical features for machine learning. Here, the machine learning adopts Genetic Algorithm (GA)-Support Vector Machine (SVM) where the feature selection of Sequential Floating Forward Selection (SFFS) is employed to determine adequate features, and GA is to optimize the parameters of SVM. Our experiments reveal that the proposed attention estimation system can achieve the accuracy of 93.1% which is fairly good as compared to the conventional systems. Therefore, the proposed system embedded in smart glasses brings users mobile, convenient, and comfortable to assess their attention on learning or medical symptom checker.", "title": "" }, { "docid": "7ee5886ae2df12f12d65f5080561ecc6", "text": "Sliding mode control schemes of the static and dynamic types are proposed for the control of a magnetic levitation system. The proposed controllers guarantee the asymptotic regulation of the states of the system to their desired values. Simulation results of the proposed controllers are given to illustrate the effectiveness of them. Robustness of the control schemes to changes in the parameters of the system is also investigated.", "title": "" }, { "docid": "a604527951768b088fe2e40104fa78bb", "text": "In this study, the Multi-Layer Perceptron (MLP)with Back-Propagation learning algorithm are used to classify to effective diagnosis Parkinsons disease(PD).It’s a challenging problem for medical community.Typically characterized by tremor, PD occurs due to the loss of dopamine in the brains thalamic region that results in involuntary or oscillatory movement in the body. 
A feature selection algorithm along with biomedical test values to diagnose Parkinson disease.Clinical diagnosis is done mostly by doctor’s expertise and experience.But still cases are reported of wrong diagnosis and treatment.Patients are asked to take number of tests for diagnosis.In many cases,not all the tests contribute towards effective diagnosis of a disease.Our work is to classify the presence of Parkinson disease with reduced number of attributes.Original,22 attributes are involved in classify.We use Information Gain to determine the attributes which reduced the number of attributes which is need to be taken from patients.The Artificial neural networks is used to classify the diagnosis of patients.Twenty-Two attributes are reduced to sixteen attributes.The accuracy is in training data set is 82.051% and in the validation data set is 83.333%. Keywords—Data mining , classification , Parkinson disease , Artificial neural networks , Feature Selection , Information Gain", "title": "" }, { "docid": "3c80aa753cac4bebd8c6808a361973c7", "text": "We develop a computer-assisted method for the discovery of insightful conceptualizations, in the form of clusterings (i.e., partitions) of input objects. Each of the numerous fully automated methods of cluster analysis proposed in statistics, computer science, and biology optimize a different objective function. Almost all are well defined, but how to determine before the fact which one, if any, will partition a given set of objects in an \"insightful\" or \"useful\" way for a given user is unknown and difficult, if not logically impossible. We develop a metric space of partitions from all existing cluster analysis methods applied to a given dataset (along with millions of other solutions we add based on combinations of existing clusterings) and enable a user to explore and interact with it and quickly reveal or prompt useful or insightful conceptualizations. In addition, although it is uncommon to do so in unsupervised learning problems, we offer and implement evaluation designs that make our computer-assisted approach vulnerable to being proven suboptimal in specific data types. We demonstrate that our approach facilitates more efficient and insightful discovery of useful information than expert human coders or many existing fully automated methods.", "title": "" }, { "docid": "b4284204ae7d9ef39091a651583b3450", "text": "Embedding learning, a.k.a. representation learning, has been shown to be able to model large-scale semantic knowledge graphs. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Latent variable models are well suited to deal with the high dimensionality and sparsity of typical knowledge graphs. In recent publications the embedding models were extended to also consider temporal evolutions, temporal patterns and subsymbolic representations. In this paper we map embedding models, which were developed purely as solutions to technical problems for modelling temporal knowledge graphs, to various cognitive memory functions, in particular to semantic and concept memory, episodic memory, sensory memory, short-term memory, and working memory. We discuss learning, query answering, the path from sensory input to semantic decoding, and relationships between episodic memory and semantic memory. We introduce a number of hypotheses on human memory that can be derived from the developed mathematical models. 
There are three main hypotheses. The first one is that semantic memory is described as triples and that episodic memory is described as triples in time. A second main hypothesis is that generalized entities have unique latent representations which are shared across memory functions and that are the basis for prediction, decision support and other functionalities executed by working memory. A third main hypothesis is that the latent representation for a time t, which summarizes all sensory information available at time t, is the basis for episodic memory. The proposed model includes both a recall of previous memories and the mental imagery of future events and sensory impressions.", "title": "" }, { "docid": "f42648f411cbf7e31940acf81bf107d0", "text": "Observing that the creation of certain types of artistic artifacts necessitate intelligence, we present the Lovelace 2.0 Test of creativity as an alternative to the Turing Test as a means of determining whether an agent is intelligent. The Lovelace 2.0 Test builds off prior tests of creativity and additionally provides a means of directly comparing the relative intelligence of different agents.", "title": "" }, { "docid": "bac01694d6b578b5b873d5de131cb844", "text": "The methylotrophic yeast Komagataella phaffii (Pichia pastoris) has been developed into a highly successful system for heterologous protein expression in both academia and industry. However, overexpression of recombinant protein often leads to severe burden on the physiology of K. phaffii and triggers cellular stress. To elucidate the global effect of protein overexpression, we set out to analyze the differential transcriptome of recombinant strains with 12 copies and a single copy of phospholipase A2 gene (PLA 2) from Streptomyces violaceoruber. Through GO, KEGG and heat map analysis of significantly differentially expressed genes, the results indicated that the 12-copy strain suffered heavy cellular stress. The genes involved in protein processing and stress response were significantly upregulated due to the burden of protein folding and secretion, while the genes in ribosome and DNA replication were significantly downregulated possibly contributing to the reduced cell growth rate under protein overexpression stress. Three most upregulated heat shock response genes (CPR6, FES1, and STI1) were co-overexpressed in K. phaffii and proved their positive effect on the secretion of reporter enzymes (PLA2 and prolyl endopeptidase) by increasing the production up to 1.41-fold, providing novel helper factors for rational engineering of K. phaffii.", "title": "" }, { "docid": "a608f681a3833d932bf723ca26dfe511", "text": "The purpose of the study was to explore whether personality traits moderate the association between social comparison on Facebook and subjective well-being, measured as both life satisfaction and eudaimonic well-being. Data were collected via an online questionnaire which measured Facebook use, social comparison behavior and personality traits for 337 respondents. The results showed positive associations between Facebook intensity and both measures of subjective well-being, and negative associations between Facebook social comparison and both measures of subjective well-being. Personality traits were assessed by the Reinforcement Sensitivity Theory personality questionnaire, which revealed that Reward Interest was positively associated with eudaimonic well-being, and Goal-Drive Persistence was positively associated with both measures of subjective well-being. 
Impulsivity was negatively associated with eudaimonic well-being and the Behavioral Inhibition System was negatively associated with both measures of subjective well-being. Interactions between personality traits and social comparison on Facebook indicated that for respondents with high Goal-Drive Persistence, Facebook social comparison had a positive association with eudaimonic well-being, thus confirming that some personality traits moderate the association between Facebook social comparison and subjective well-being. The results of this study highlight how individual differences in personality may impact how social comparison on Facebook affects individuals’ subjective well-being.", "title": "" }, { "docid": "787f95f8c28bfcf14eef486725a25bd2", "text": "BACKGROUND\nThere is a lack of knowledge on the primary and secondary static stabilizing functions of the posterior oblique ligament (POL), the proximal and distal divisions of the superficial medial collateral ligament (sMCL), and the meniscofemoral and meniscotibial portions of the deep medial collateral ligament (MCL).\n\n\nHYPOTHESIS\nIdentification of the primary and secondary stabilizing functions of the individual components of the main medial knee structures will provide increased knowledge of the medial knee ligamentous stability.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nTwenty-four cadaveric knees were equally divided into 3 groups with unique sequential sectioning sequences of the POL, sMCL (proximal and distal divisions), and deep MCL (meniscofemoral and meniscotibial portions). A 6 degree of freedom electromagnetic tracking system monitored motion after application of valgus loads (10 N.m) and internal and external rotation torques (5 N.m) at 0 degrees , 20 degrees , 30 degrees , 60 degrees , and 90 degrees of knee flexion.\n\n\nRESULTS\nThe primary valgus stabilizer was the proximal division of the sMCL. The primary external rotation stabilizer was the distal division of the sMCL at 30 degrees of knee flexion. The primary internal rotation stabilizers were the POL and the distal division of the sMCL at all tested knee flexion angles, the meniscofemoral portion of the deep MCL at 20 degrees , 60 degrees , and 90 degrees of knee flexion, and the meniscotibial portion of the deep MCL at 0 degrees and 30 degrees of knee flexion.\n\n\nCONCLUSION\nAn intricate relationship exists among the main medial knee structures and their individual components for static function to applied loads.\n\n\nCLINICAL SIGNIFICANCE\nInterpretation of clinical knee motion testing following medial knee injuries will improve with the information in this study. Significant increases in external rotation at 30 degrees of knee flexion were found with all medial knee structures sectioned, which indicates that a positive dial test may be found not only for posterolateral knee injuries but also for medial knee injuries.", "title": "" }, { "docid": "15fddcfa5a9cbf80fe6640c815ca89ea", "text": "Relation extraction is one of the core challenges in automated knowledge base construction. One line of approach for relation extraction is to perform multi-hop reasoning on the paths connecting an entity pair to infer new relations. While these methods have been successfully applied for knowledge base completion, they do not utilize the entity or the entity type information to make predictions. 
In this work, we incorporate selectional preferences, i.e., relations enforce constraints on the allowed entity types for the candidate entities, to multi-hop relation extraction by including entity type information. We achieve a 17.67% (relative) improvement in MAP score in a relation extraction task when compared to a method that does not use entity type information.", "title": "" }, { "docid": "6ab58e75daf299f3463be4432def87b2", "text": "Less than thirty years after the giant magnetoresistance (GMR) effect was described, GMR sensors are the preferred choice in many applications demanding the measurement of low magnetic fields in small volumes. This rapid deployment from theoretical basis to market and state-of-the-art applications can be explained by the combination of excellent inherent properties with the feasibility of fabrication, allowing the real integration with many other standard technologies. In this paper, we present a review focusing on how this capability of integration has allowed the improvement of the inherent capabilities and, therefore, the range of application of GMR sensors. After briefly describing the phenomenological basis, we deal on the benefits of low temperature deposition techniques regarding the integration of GMR sensors with flexible (plastic) substrates and pre-processed CMOS chips. In this way, the limit of detection can be improved by means of bettering the sensitivity or reducing the noise. We also report on novel fields of application of GMR sensors by the recapitulation of a number of cases of success of their integration with different heterogeneous complementary elements. We finally describe three fully functional systems, two of them in the bio-technology world, as the proof of how the integrability has been instrumental in the meteoric development of GMR sensors and their applications.", "title": "" }, { "docid": "5ae157937813e060a72ecb918d4dc5d1", "text": "Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges, the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Beyesian, etc., from sequential chunks of the data stream. The classifiers in the ensemble are judiciously weighted based on their expected classification accuracy on the test data under the time-evolving environment. Thus, the ensemble approach improves both the efficiency in learning the model and the accuracy in performing classification. Our empirical study shows that the proposed methods have substantial advantage over single-classifier approaches in prediction accuracy, and the ensemble framework is effective for a variety of classification models.", "title": "" }, { "docid": "999c1fa41498e8a330dfbd8fdb4c6d6e", "text": "Wellness is a widely popular concept that is commonly applied to fitness and self-help products or services. Inference of personal wellness-related attributes, such as body mass index or diseases tendency, as well as understanding of global dependencies between wellness attributes and users’ behavior is of crucial importance to various applications in personal and public wellness domains. 
Meanwhile, the emergence of social media platforms and wearable sensors makes it feasible to perform wellness profiling for users from multiple perspectives. However, research efforts on wellness profiling and integration of social media and sensor data are relatively sparse, and this study represents one of the first attempts in this direction. Specifically, to infer personal wellness attributes, we proposed multi-source individual user profile learning framework named “TweetFit”. “TweetFit” can handle data incompleteness and perform wellness attributes inference from sensor and social media data simultaneously. Our experimental results show that the integration of the data from sensors and multiple social media sources can substantially boost the wellness profiling performance.", "title": "" } ]
scidocsrr
a4a87cd46717129b8d9ea63046db2f4e
Survey on Various Gesture Recognition Techniques for Interfacing Machines Based on Ambient Intelligence
[ { "docid": "9d0b7f84d0d326694121a8ba7a3094b4", "text": "Passive sensing of human hand and limb motion is important for a wide range of applications from human-computer interaction to athletic performance measurement. High degree of freedom articulated mechanisms like the human hand are di cult to track because of their large state space and complex image appearance. This article describes a model-based hand tracking system, called DigitEyes, that can recover the state of a 27 DOF hand model from ordinary gray scale images at speeds of up to 10 Hz.", "title": "" } ]
[ { "docid": "d7305a95bb305a00d92ac94b67687f5c", "text": "In the past decade, we have witnessed explosive growth in the number of low-power embedded and Internet-connected devices, reinforcing the new paradigm, Internet of Things (IoT). The low power wide area network (LPWAN), due to its long-range, low-power and low-cost communication capability, is actively considered by academia and industry as the future wireless communication standard for IoT. However, despite the increasing popularity of `mobile IoT', little is known about the suitability of LPWAN for those mobile IoT applications in which nodes have varying degrees of mobility. To fill this knowledge gap, in this paper, we conduct an experimental study to evaluate, analyze, and characterize LPWAN in both indoor and outdoor mobile environments. Our experimental results indicate that the performance of LPWAN is surprisingly susceptible to mobility, even to minor human mobility, and the effect of mobility significantly escalates as the distance to the gateway increases. These results call for development of new mobility-aware LPWAN protocols to support mobile IoT.", "title": "" }, { "docid": "76dd20f0464ff42badc5fd4381eed256", "text": "C therapy (CBT) approaches are rooted in the fundamental principle that an individual’s cognitions play a significant and primary role in the development and maintenance of emotional and behavioral responses to life situations. In CBT models, cognitive processes, in the form of meanings, judgments, appraisals, and assumptions associated with specific life events, are the primary determinants of one’s feelings and actions in response to life events and thus either facilitate or hinder the process of adaptation. CBT includes a range of approaches that have been shown to be efficacious in treating posttraumatic stress disorder (PTSD). In this chapter, we present an overview of leading cognitive-behavioral approaches used in the treatment of PTSD. The treatment approaches discussed here include cognitive therapy/reframing, exposure therapies (prolonged exposure [PE] and virtual reality exposure [VRE]), stress inoculation training (SIT), eye movement desensitization and reprocessing (EMDR), and Briere’s selftrauma model (1992, 1996, 2002). In our discussion of each of these approaches, we include a description of the key assumptions that frame the particular approach and the main strategies associated with the treatment. In the final section of this chapter, we review the growing body of research that has evaluated the effectiveness of cognitive-behavioral treatments for PTSD.", "title": "" }, { "docid": "8decac4ff789460595664a38e7527ed6", "text": "Unit selection synthesis has shown itself to be capable of producing high quality natural sounding synthetic speech when constructed from large databases of well-recorded, well-labeled speech. However, the cost in time and expertise of building such voices is still too expensive and specialized to be able to build individual voices for everyone. The quality in unit selection synthesis is directly related to the quality and size of the database used. As we require our speech synthesizers to have more variation, style and emotion, for unit selection synthesis, much larger databases will be required. As an alternative, more recently we have started looking for parametric models for speech synthesis, that are still trained from databases of natural speech but are more robust to errors and allow for better modeling of variation. 
This paper presents the CLUSTERGEN synthesizer which is implemented within the Festival/FestVox voice building environment. As well as the basic technique, three methods of modeling dynamics in the signal are presented and compared: a simple point model, a basic trajectory model and a trajectory model with overlap and add.", "title": "" }, { "docid": "9c25084d690dcd1a654289f9817105bb", "text": "The authors describe a behavioral theory of the dynamics of insider-threat risks. Drawing on data related to information technology security violations and on a case study created to explain the dynamics observed in that data, the authors constructed a system dynamics model of a theory of the development of insider-threat risks and conducted numerical simulations to explore the parameter and response spaces of the model. By examining several scenarios in which attention to events, increased judging capabilities, better information, and training activities are simulated, the authors theorize about why information technology security effectiveness changes over time. The simulation results argue against the common presumption that increased security comes at the cost of reduced production.", "title": "" }, { "docid": "cd5210231c5fa099be6b858a3069414d", "text": "Fat grafting to the aging face has become an integral component of esthetic surgery. However, the amount of fat to inject to each area of the face is not standardized and has been based mainly on the surgeon’s experience. The purpose of this study was to perform a systematic review of injected fat volume to different facial zones. A systematic review of the literature was performed through a MEDLINE search using keywords “facial,” “fat grafting,” “lipofilling,” “Coleman technique,” “autologous fat transfer,” and “structural fat grafting.” Articles were then sorted by facial subunit and analyzed for: author(s), year of publication, study design, sample size, donor site, fat preparation technique, average and range of volume injected, time to follow-up, percentage of volume retention, and complications. Descriptive statistics were performed. Nineteen articles involving a total of 510 patients were included. Rhytidectomy was the most common procedure performed concurrently with fat injection. The mean volume of fat injected to the forehead is 6.5 mL (range 4.0–10.0 mL); to the glabellar region 1.4 mL (range 1.0–4.0 mL); to the temple 5.9 mL per side (range 2.0–10.0 mL); to the eyebrow 5.5 mL per side; to the upper eyelid 1.7 mL per side (range 1.5–2.5 mL); to the tear trough 0.65 mL per side (range 0.3–1.0 mL); to the infraorbital area (infraorbital rim to lower lid/cheek junction) 1.4 mL per side (range 0.9–3.0 mL); to the midface 1.4 mL per side (range 1.0–4.0 mL); to the nasolabial fold 2.8 mL per side (range 1.0–7.5 mL); to the mandibular area 11.5 mL per side (range 4.0–27.0 mL); and to the chin 6.7 mL (range 1.0–20.0 mL). Data on exactly how much fat to inject to each area of the face in facial fat grafting are currently limited and vary widely based on different methods and anatomical terms used. This review offers the ranges and the averages for the injected volume in each zone. This journal requires that authors assign a level of evidence to each article. 
For a full description of these Evidence-Based Medicine ratings please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.", "title": "" }, { "docid": "4d4bf7b06c88fba54b794921ee67109f", "text": "This article provides surgical pathologists an overview of health information systems (HISs): what they are, what they do, and how such systems relate to the practice of surgical pathology. Much of this article is dedicated to the electronic medical record. Information, in how it is captured, transmitted, and conveyed, drives the effectiveness of such electronic medical record functionalities. So critical is information from pathology in integrated clinical care that surgical pathologists are becoming gatekeepers of not only tissue but also information. Better understanding of HISs can empower surgical pathologists to become stakeholders who have an impact on the future direction of quality integrated clinical care.", "title": "" }, { "docid": "d034e1b08f704c7245a50bb383206001", "text": "Multitask learning, i.e. learning several tasks at once with the same neural network, can improve performance in each of the tasks. Designing deep neural network architectures for multitask learning is a challenge: There are many ways to tie the tasks together, and the design choices matter. The size and complexity of this problem exceeds human design ability, making it a compelling domain for evolutionary optimization. Using the existing state of the art soft ordering architecture as the starting point, methods for evolving the modules of this architecture and for evolving the overall topology or routing between modules are evaluated in this paper. A synergetic approach of evolving custom routings with evolved, shared modules for each task is found to be very powerful, significantly improving the state of the art in the Omniglot multitask, multialphabet character recognition domain. This result demonstrates how evolution can be instrumental in advancing deep neural network and complex system design in general.", "title": "" }, { "docid": "3058eddad0052470b7b74cb6a4142ffa", "text": "With ever-increasing advancements in technology, neuroscientists are able to collect data in greater volumes and with finer resolution. The bottleneck in understanding how the brain works is consequently shifting away from the amount and type of data we can collect and toward what we actually do with the data. There has been a growing interest in leveraging this vast volume of data across levels of analysis, measurement techniques, and experimental paradigms to gain more insight into brain function. Such efforts are visible at an international scale, with the emergence of big data neuroscience initiatives, such as the BRAIN initiative (Bargmann et al., 2014), the Human Brain Project, the Human Connectome Project, and the National Institute of Mental Health's Research Domain Criteria initiative. With these large-scale projects, much thought has been given to data-sharing across groups (Poldrack and Gorgolewski, 2014; Sejnowski et al., 2014); however, even with such data-sharing initiatives, funding mechanisms, and infrastructure, there still exists the challenge of how to cohesively integrate all the data. 
At multiple stages and levels of neuroscience investigation, machine learning holds great promise as an addition to the arsenal of analysis tools for discovering how the brain works.", "title": "" }, { "docid": "db7bc8bbfd7dd778b2900973f2cfc18d", "text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.", "title": "" }, { "docid": "4dcd3f6b631458707153a4369ccfd269", "text": "Smart grids are electric networks that employ advanced monitoring, control, and communication technologies to deliver reliable and secure energy supply, enhance operation efficiency for generators and distributors, and provide flexible choices for prosumers. Smart grids are a combination of complex physical network systems and cyber systems that face many technological challenges. In this paper, we will first present an overview of these challenges in the context of cyber-physical systems. We will then outline potential contributions that cyber-physical systems can make to smart grids, as well as the challenges that smart grids present to cyber-physical systems. Finally, implications of current technological advances to smart grids are outlined.", "title": "" }, { "docid": "1a1c9b8fa2b5fc3180bc1b504def5ea1", "text": "Wireless sensor networks can be deployed in any attended or unattended environments like environmental monitoring, agriculture, military, health care etc., where the sensor nodes forward the sensing data to the gateway node. As the sensor node has very limited battery power and cannot be recharged after deployment, it is very important to design a secure, effective and light weight user authentication and key agreement protocol for accessing the sensed data through the gateway node over insecure networks. Most recently, Turkanovic et al. proposed a light weight user authentication and key agreement protocol for accessing the services of the WSNs environment and claimed that the same protocol is efficient in terms of security and complexities than related existing protocols. In this paper, we have demonstrated several security weaknesses of the Turkanovic et al. protocol. Additionally, we have also illustrated that the authentication phase of the Turkanovic et al. is not efficient in terms of security parameters. In order to fix the above mentioned security pitfalls, we have primarily designed a novel architecture for the WSNs environment and basing upon which a proposed scheme has been presented for user authentication and key agreement scheme. 
The security validation of the proposed protocol has done by using BAN logic, which ensures that the protocol achieves mutual authentication and session key agreement property securely between the entities involved. Moreover, the proposed scheme has simulated using well popular AVISPA security tool, whose simulation results show that the protocol is SAFE under OFMC and CL-AtSe models. Besides, several security issues informally confirm that the proposed protocol is well protected in terms of relevant security attacks including the above mentioned security pitfalls. The proposed protocol not only resists the above mentioned security weaknesses, but also achieves complete security requirements including specially energy efficiency, user anonymity, mutual authentication and user-friendly password change phase. Performance comparison section ensures that the protocol is relatively efficient in terms of complexities. The security and performance analysis makes the system so efficient that the proposed protocol can be implemented in real-life application. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "6a798f74dab69594c790a088fdec6491", "text": "Clustering and classification of ECG records for four patient classes from the Internet databases by using the Weka system. Patient classes include normal, atrial arrhythmia, supraventricular arrhythmia and CHF. Chaos features are extracted automatically by using the ECG Chaos Extractor platform and recorded in Arff files. The list of features includes: correlation dimension, central tendency measure, spatial filling index and approximate entropy. Both ECG signal files and ECG annotations files are analyzed. The results show that chaos features can successfully cluster and classify the ECG annotations records by using standard and efficient algorithms such as EM and C4.5.", "title": "" }, { "docid": "8474b5b3ed5838e1d038e73579168f40", "text": "For the first time to the best of our knowledge, this paper provides an overview of millimeter-wave (mmWave) 5G antennas for cellular handsets. Practical design considerations and solutions related to the integration of mmWave phased-array antennas with beam switching capabilities are investigated in detail. To experimentally examine the proposed methodologies, two types of mesh-grid phased-array antennas featuring reconfigurable horizontal and vertical polarizations are designed, fabricated, and measured at the 60 GHz spectrum. Afterward the antennas are integrated with the rest of the 60 GHz RF and digital architecture to create integrated mmWave antenna modules and implemented within fully operating cellular handsets under plausible user scenarios. The effectiveness, current limitations, and required future research areas regarding the presented mmWave 5G antenna design technologies are studied using mmWave 5G system benchmarks.", "title": "" }, { "docid": "90bf404069bd3dfff1e6b108dafffe4c", "text": "To illustrate the differing thoughts and emotions involved in guiding habitual and nonhabitual behavior, 2 diary studies were conducted in which participants provided hourly reports of their ongoing experiences. When participants were engaged in habitual behavior, defined as behavior that had been performed almost daily in stable contexts, they were likely to think about issues unrelated to their behavior, presumably because they did not have to consciously guide their actions. 
When engaged in nonhabitual behavior, or actions performed less often or in shifting contexts, participants' thoughts tended to correspond to their behavior, suggesting that thought was necessary to guide action. Furthermore, the self-regulatory benefits of habits were apparent in the lesser feelings of stress associated with habitual than nonhabitual behavior.", "title": "" }, { "docid": "edb99b9884679b54a4db70cfa367ffa5", "text": "Smart cities are nowadays expanding and flourishing worldwide with Internet of Things (IoT), i.e. smart things like sensors and actuators, and mobile devices applications and installations which change the citizens' and authorities' everyday life. Smart cities produce daily huge streams of sensors data while citizens interact with Web and/or mobile devices utilizing social networks. In such a smart city context, new approaches to integrate big data streams from both sensors and social networks are needed to exploit big data production and circulation towards offering innovative solutions and applications. The SmartSantander infrastructure (EU FP7 project) has offered the ground for the SEN2SOC experiment which has integrated sensor and social data streams. This presentation outlines its research and industrial perspective and potential impact.", "title": "" }, { "docid": "17ab4797666afed3a37a8761fcbb0d1e", "text": "In this paper, we propose a CPW fed triple band notch UWB antenna array with EBG structure. The major consideration in the antenna array design is the mutual coupling effect that exists within the elements. The use of Electromagnetic Band Gap structures in the antenna arrays can limit the coupling by suppresssing the surface waves. The triple band notch antenna consists of three slots which act as notch resonators for a specific band of frequencies, the C shape slot at the main radiator (WiMax-3.5GHz), a pair of CSRR structures at the ground plane(WLAN-5.8GHz) and an inverted U shaped slot in the center of the patch (Satellite Service bands-8.2GHz). The main objective is to reduce mutual coupling which in turn improves the peak realized gain, directivity.", "title": "" }, { "docid": "7105302557aa312e3dedbc7d7cc6e245", "text": "a Canisius College, Richard J. Wehle School of Business, Department of Management and Marketing, 2001 Main Street, Buffalo, NY 14208-1098, United States b Clemson University, College of Business and Behavioral Science, Department of Marketing, 245 Sirrine Hall, Clemson, SC 29634-1325, United States c University of Alabama at Birmingham, School of Business, Department of Marketing, Industrial Distribution and Economics, 1150 10th Avenue South, Birmingham, AL 35294, United States d Vlerick School of Management Reep 1, BE-9000 Ghent Belgium", "title": "" }, { "docid": "57e5d801778711f2ab9a152f08ae53e8", "text": "A modular multilevel converter (MMC) is one of the next-generation multilevel PWM converters intended for high- or medium-voltage power conversion without transformers. The MMC consists of cascade connection of multiple bidirectional PWM chopper-cells and floating dc capacitors per leg, thus requiring voltage-balancing control of their chopper-cells. However, no paper has been discussed explicitly on voltage-balancing control with theoretical and experimental verifications. This paper deals with two types of modular multilevel PWM converters with focus on their circuit configurations and voltage-balancing control. 
Combination of averaging and balancing controls enables the MMCs to achieve voltage balancing without any external circuit. The viability of the MMCs as well as the effectiveness of the PWM control method is confirmed by simulation and experiment.", "title": "" }, { "docid": "f05b001f03e00bf2d0807eb62d9e2369", "text": "Since the hydraulic actuating suspension system has nonlinear and time-varying behavior, it is difficult to establish an accurate model for designing a model-based controller. Here, an adaptive fuzzy sliding mode controller is proposed to suppress the sprung mass position oscillation due to road surface variation. This intelligent control strategy combines an adaptive rule with fuzzy and sliding mode control algorithms. It has online learning ability to deal with the system time-varying and nonlinear uncertainty behaviors, and adjust the control rules parameters. Only eleven fuzzy rules are required for this active suspension system and these fuzzy control rules can be established and modified continuously by online learning. The experimental results show that this intelligent control algorithm effectively suppresses the oscillation amplitude of the sprung mass with respect to various road surface disturbances.", "title": "" }, { "docid": "abe375d47dc0344467d41f6a0c13f885", "text": "Brain and the gastrointestinal (GI) tract are intimately connected to form a bidirectional neurohumoral communication system. The communication between gut and brain, knows as the gut-brain axis, is so well established that the functional status of gut is always related to the condition of brain. The researches on the gut-brain axis were traditionally focused on the psychological status affecting the function of the GI tract. However, recent evidences showed that gut microbiota communicates with the brain via the gut-brain axis to modulate brain development and behavioral phenotypes. These recent findings on the new role of gut microbiota in the gut-brain axis implicate that gut microbiota could associate with brain functions as well as neurological diseases via the gut-brain axis. To elucidate the role of gut microbiota in the gut-brain axis, precise identification of the composition of microbes constituting gut microbiota is an essential step. However, identification of microbes constituting gut microbiota has been the main technological challenge currently due to massive amount of intestinal microbes and the difficulties in culture of gut microbes. Current methods for identification of microbes constituting gut microbiota are dependent on omics analysis methods by using advanced high tech equipment. Here, we review the association of gut microbiota with the gut-brain axis, including the pros and cons of the current high throughput methods for identification of microbes constituting gut microbiota to elucidate the role of gut microbiota in the gut-brain axis.", "title": "" } ]
scidocsrr
ec8951758ac906219458a6f05a076222
Generation of THz wave with orbital angular momentum by graphene patch reflectarray
[ { "docid": "2943c046bae638a287ddaf72129bee0e", "text": "The use of graphene for fixed-beam reflectarray antennas at Terahertz (THz) is proposed. Graphene's unique electronic band structure leads to a complex surface conductivity at THz frequencies, which allows the propagation of very slow plasmonic modes. This leads to a drastic reduction of the electrical size of the array unit cell and thereby good array performance. The proposed reflectarray has been designed at 1.3 THz and comprises more than 25000 elements of size about λ0/16. The array reflective unit cell is analyzed using a full vectorial approach, taking into account the variation of the angle of incidence and assuming local periodicity. Good performance is obtained in terms of bandwidth, cross-polar, and grating lobes suppression, proving the feasibility of graphene-based reflectarrays and other similar spatially fed structures at Terahertz frequencies. This result is also a first important step toward reconfigurable THz reflectarrays using graphene electric field effect.", "title": "" } ]
[ { "docid": "b9538c45fc55caff8b423f6ecc1fe416", "text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.", "title": "" }, { "docid": "0bb270bfff12141bdc6daeb7415befd0", "text": "Community analysis algorithm proposed by Clauset, Newman, and Moore (CNM algorithm) finds community structure in social networks. Unfortunately, CNM algorithm does not scale well and its use is practically limited to networks whose sizes are up to 500,000 nodes. We show that this inefficiency is caused from merging communities in unbalanced manner and that a simple heuristics that attempts to merge community structures in a balanced manner can dramatically improve community structure analysis. The proposed techniques are tested using data sets obtained from existing social networking service that hosts 5.5 million users. We have tested three three variations of the heuristics. The fastest method processes a SNS friendship network with 1 million users in 5 minutes (70 times faster than CNM) and another friendship network with 4 million users in 35 minutes, respectively. Another one processes a network with 500,000 nodes in 50 minutes (7 times faster than CNM), finds community structures that has improved modularity, and scales to a network with 5.5 million.", "title": "" }, { "docid": "09c808f014ff9b93795a5e040b2ad7de", "text": "The Internet of Things (IoT) concept proposes that everyday objects are globally accessible from the Internet and integrate into new services having a remarkable impact on our society. 
Opposite to Internet world, things usually belong to resource-challenged environments where energy, data throughput, and computing resources are scarce. Building upon existing standards in the field such as IEEE1451 and ZigBee and rooted in context semantics, this paper proposes CTP (Communication Things Protocol) as a protocol specification to allow interoperability among things with different communication standards as well as simplicity and functionality to build IoT systems. Also, this paper proposes the use of the IoT gateway as a fundamental component in IoT architectures to provide seamless connectivity and interoperability among things and connect two different worlds to build the IoT: the Things world and the Internet world. Both CTP and IoT gateway constitute a middleware content-centric architecture presented as the mechanism to achieve a balance between the intrinsic limitations of things in the physical world and what is required from them in the virtual world. Said middleware content-centric architecture is implemented within the frame of two European projects targeting smart environments and proving said CTP’s objectives in real scenarios.", "title": "" }, { "docid": "26cd0260e2a460ac5aa96466ff92f748", "text": "Deep Convolutional Neural Networks (CNNs) have demonstrated excellent performance in image classification, but still show room for improvement in object-detection tasks with many categories, in particular for cluttered scenes and occlusion. Modern detection algorithms like Regions with CNNs (Girshick et al., 2014) rely on Selective Search (Uijlings et al., 2013) to propose regions which with high probability represent objects, where in turn CNNs are deployed for classification. Selective Search represents a family of sophisticated algorithms that are engineered with multiple segmentation, appearance and saliency cues, typically coming with a significant runtime overhead. Furthermore, (Hosang et al., 2014) have shown that most methods suffer from low reproducibility due to unstable superpixels, even for slight image perturbations. Although CNNs are subsequently used for classification in top-performing object-detection pipelines, current proposal methods are agnostic to how these models parse objects and their rich learned representations. As a result they may propose regions which may not resemble high-level objects or totally miss some of them. To overcome these drawbacks we propose a boosting approach which directly takes advantage of hierarchical CNN features for detecting regions of interest fast. We demonstrate its performance on ImageNet 2013 detection benchmark and compare it with state-of-the-art methods. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.", "title": "" }, { "docid": "a3735cc40727de4016ee29f6a29d578f", "text": "By running applications and services closer to the user, edge processing provides many advantages, such as short response time and reduced network traffic. Deep-learning based algorithms provide significantly better performances than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital.
In this paper, we build a lightweight neural network, termed LiteNet which uses a deep learning algorithm design to diagnose arrhythmias, as an example to show how we design deep learning schemes for resource-constrained mobile devices. Compare to other deep learning models with an equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias, and in its feasibility for use at the mobile devices.", "title": "" }, { "docid": "014759efa636aec38aa35287b61e44a4", "text": "Outlier detection is an important topic in machine learning and has been used in a wide range of applications. In this paper, we approach outlier detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution. However, due to the sparsity of data in high-dimensional space, a limited number of potential outliers may fail to provide sufficient information to assist the classifier in describing a boundary that can separate outliers from normal data effectively. To address this, we propose a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator. Moreover, to prevent the generator from falling into the mode collapsing problem, the stop node of training should be determined when SO-GAAL is able to provide sufficient information. But without any prior information, it is extremely difficult for SO-GAAL. Therefore, we expand the network structure of SO-GAAL from a single generator to multiple generators with different objectives (MO-GAAL), which can generate a reasonable reference distribution for the whole dataset. We empirically compare the proposed approach with several state-of-the-art outlier detection methods on both synthetic and real-world datasets. The results show that MO-GAAL outperforms its competitors in the majority of cases, especially for datasets with various cluster types or high irrelevant variable ratio. The experiment codes are available at: https://github.com/leibinghe/GAAL-based-outlier-detection", "title": "" }, { "docid": "9b1a7f811d396e634e9cc5e34a18404e", "text": "We introduce a novel colorization framework for old black-and-white cartoons which has been originally produced by a cel or paper based technology. In this case the dynamic part of the scene is represented by a set of outlined homogeneous regions that superimpose static background. To reduce a large amount of manual intervention we combine unsupervised image segmentation, background reconstruction and structural prediction. Our system in addition allows the user to specify the brightness of applied colors unlike the most of previous approaches which operate only with hue and saturation. 
We also present a simple but effective color modulation, composition and dust spot removal techniques able produce color images in broadcast quality without additional user intervention.", "title": "" }, { "docid": "164a1246119f8e7c230864ac5300da60", "text": "In today’s world social networking platforms such as Instagram, Facebook, Google+ etc, have created the boon in our humanitarian society[1]. Along with these social networking platforms there comes a great responsibility of handling user privacy as well as user data. In most of these websites, data is stored on the centralized system called as the server. [1] The whole system crash down if the server goes down. One of the solutions for this problem is to use a decentralized system. Decentralized applications works on Blockchain. A Blockchain is a group of blocks connected sequentially to each other. The blockchains are designed so that transactions remain immutable i.e. unchanged hence provides security. The data can be distributed and no one can tampered that data. This paper presents a decentralized social media photo sharing web application which is based on blockchain technology where the user would be able to view, like, comment, share photos shared by different users.", "title": "" }, { "docid": "129a42c825850acd12b2f90a0c65f4ea", "text": "Vertical fractures in teeth can present difficulties in diagnosis. There are, however, many specific clinical and radiographical signs which, when present, can alert clinicians to the existence of a fracture. In this review, the diagnosis of vertical root fractures is discussed in detail, and examples are presented of clinical and radiographic signs associated with these fractured teeth. Treatment alternatives are discussed for both posterior and anterior teeth.", "title": "" }, { "docid": "6557347e1c0ebf014842c9ae2c77dbed", "text": "Steganography is derived from the Greek word steganos which literally means “Covered” and graphy means “Writing”, i.e. covered writing. Steganography refers to the science of “invisible” communication. For hiding secret information in various file formats, there exists a large variety of steganographic techniques some are more complex than others and all of them have respective strong and weak points. The Least Significant Bit (LSB) embedding technique suggests that data can be hidden in the least significant bits of the cover image and the human eye would be unable to notice the hidden image in the cover file. This technique can be used for hiding images in 24-Bit, 8-Bit, Gray scale format. This paper explains the LSB Embedding technique and Presents the evaluation for various file formats.", "title": "" }, { "docid": "da27ccc6467cd913a7a5124c5e08c6f4", "text": "The aggressive optimization of heavily used kernels is an important problem in high-performance computing. However, both general purpose compilers and highly specialized tools such as superoptimizers often do not have sufficient static knowledge of restrictions on program inputs that could be exploited to produce the very best code.
For many applications, the best possible code is conditionally correct: the optimized kernel is equal to the code that it replaces only under certain preconditions on the kernel's inputs. The main technical challenge in producing conditionally correct optimizations is in obtaining non-trivial and useful conditions and proving conditional equivalence formally in the presence of loops. We combine abstract interpretation, decision procedures, and testing to yield a verification strategy that can address both of these problems. This approach yields a superoptimizer for x86 that in our experiments produces binaries that are often multiple times faster than those produced by production compilers.", "title": "" }, { "docid": "b266a1490455f8a1708471bf7069f7e9", "text": "Stevia rebaudiana, a perennial herb from the Asteraceae family, is known to the scientific world for its sweetness and steviol glycosides (SGs). SGs are the secondary metabolites responsible for the sweetness of Stevia. They are synthesized by SG biosynthesis pathway operating in the leaves. Most of the genes encoding the enzymes of this pathway have been cloned and characterized from Stevia. Out of various SGs, stevioside and rebaudioside A are the major metabolites. SGs including stevioside have also been synthesized by enzymes and microbial agents. These are non-mutagenic, non-toxic, antimicrobial, and do not show any remarkable side-effects upon consumption. Stevioside has many medical applications and its role against diabetes is most important. SGs have made Stevia an important part of the medicinal world as well as the food and beverage industry. This article presents an overview on Stevia and the importance of SGs.", "title": "" }, { "docid": "862641bf4c8efa627cd38a1fd5b561dc", "text": "WeChat is the largest acquaintance social networking platform in China, which has about 938 million monthly active user accounts. WeChat Moments, known as Friends Circle, serves social networking functions in which users can view information shared by friends. This paper addresses the problem of analyzing the patterns of cascading behavior in WeChat Moments. We obtain 229021 information cascades from WeChat Moments, in which more than 5 million users are involved during 45 days. We analyze these cascades from four aspects to understand the patterns of cascading behavior in WeChat Moments, including the patterns of diffusion structure, temporal dynamic, spatial dynamic and user behavior. In addition, the correlations between these patterns are examined. Our findings contribute to promoting products, predicting and even regulating public opinion.", "title": "" }, { "docid": "95be4f5132cde3c637c5ee217b5c8405", "text": "In recent years, information communication and computation technologies are deeply converging, and various wireless access technologies have been successful in deployment. It can be predicted that the upcoming fifthgeneration mobile communication technology (5G) can no longer be defined by a single business model or a typical technical characteristic. 5G is a multi-service and multitechnology integrated network, meeting the future needs of a wide range of big data and the rapid development of numerous businesses, and enhancing the user experience by providing smart and customized services. 
In this paper, we propose a cloud-based wireless network architecture with four components, i.e., mobile cloud, cloud-based radio access network (Cloud RAN), reconfigurable network and big data centre, which is capable of providing a virtualized, reconfigurable, smart wireless network.", "title": "" }, { "docid": "ba57149e82718bad622df36852906531", "text": "The classical psychedelic drugs, including psilocybin, lysergic acid diethylamide and mescaline, were used extensively in psychiatry before they were placed in Schedule I of the UN Convention on Drugs in 1967. Experimentation and clinical trials undertaken prior to legal sanction suggest that they are not helpful for those with established psychotic disorders and should be avoided in those liable to develop them. However, those with so-called 'psychoneurotic' disorders sometimes benefited considerably from their tendency to 'loosen' otherwise fixed, maladaptive patterns of cognition and behaviour, particularly when given in a supportive, therapeutic setting. Pre-prohibition studies in this area were sub-optimal, although a recent systematic review in unipolar mood disorder and a meta-analysis in alcoholism have both suggested efficacy. The incidence of serious adverse events appears to be low. Since 2006, there have been several pilot trials and randomised controlled trials using psychedelics (mostly psilocybin) in various non-psychotic psychiatric disorders. These have provided encouraging results that provide initial evidence of safety and efficacy, however the regulatory and legal hurdles to licensing psychedelics as medicines are formidable. This paper summarises clinical trials using psychedelics pre and post prohibition, discusses the methodological challenges of performing good quality trials in this area and considers a strategic approach to the legal and regulatory barriers to licensing psychedelics as a treatment in mainstream psychiatry. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.", "title": "" }, { "docid": "2665314258f4b7f59a55702166f59fcc", "text": "In this paper, a wireless power transfer system with magnetically coupled resonators is studied. The idea to use metamaterials to enhance the coupling coefficient and the transfer efficiency is proposed and analyzed. With numerical calculations of a system with and without metamaterials, we show that the transfer efficiency can be improved with metamaterials.", "title": "" }, { "docid": "b387d7b1f17cdbca1260ef25fe4448bc", "text": "This paper derives the transfer function from error voltage to duty cycle, which captures the quasi-digital behavior of the closed-current loop for pulsewidth modulated (PWM) dc-dc converters operating in continuous-conduction mode (CCM) using peak current-mode (PCM) control, the current-loop gain, the transfer function from control voltage to duty cycle (closed-current loop transfer function), and presents experimental verification. The sample-and-hold effect, or quasi-digital (discrete) behavior in the current loop with constant-frequency PCM in PWM dc-dc converters is described in a manner consistent with the physical behavior of the circuit. Using control theory, a transfer function from the error voltage to the duty cycle that captures the quasi-digital behavior is derived. 
This transfer function has a pole that can be in either the left-half plane or right-half plane, and captures the sample-and-hold effect accurately, enabling the characterization of the current-loop gain and closed-current loop for PWM dc-dc converters with PCM. The theoretical and experimental response results were in excellent agreement, confirming the validity of the transfer functions derived. The closed-current loop characterization can be used for the design of a controller for the outer voltage loop.", "title": "" }, { "docid": "9bcf45278e391a6ab9a0b33e93d82ea9", "text": "Non-orthogonal multiple access (NOMA) is a potential enabler for the development of 5G and beyond wireless networks. By allowing multiple users to share the same time and frequency, NOMA can scale up the number of served users, increase spectral efficiency, and improve user-fairness compared to existing orthogonal multiple access (OMA) techniques. While single-cell NOMA has drawn significant attention recently, much less attention has been given to multi-cell NOMA. This article discusses the opportunities and challenges of NOMA in a multi-cell environment. As the density of base stations and devices increases, inter-cell interference becomes a major obstacle in multi-cell networks. As such, identifying techniques that combine interference management approaches with NOMA is of great significance. After discussing the theory behind NOMA, this article provides an overview of the current literature and discusses key implementation and research challenges, with an emphasis on multi-cell NOMA.", "title": "" }, { "docid": "38f30f6070b7ca3abca54d50cba88c31", "text": "Dengue virus produces a mild acute febrile illness, dengue fever (DF) and a severe illness, dengue hemorrhagic fever (DHF). The characteristic feature of DHF is increased capillary permeability leading to extensive plasma leakage in serous cavities resulting in shock. The pathogenesis of DHF is not fully understood. This paper presents a cascade of cytokines, that in our view, may lead to DHF. The main feature is the early generation of a unique cytokine, human cytotoxic factor (hCF) that initiates a series of events leading to a shift from Th1-type response in mild illness to a Th2-type response resulting in severe DHF. The shift from Th1 to Th2 is regulated by the relative levels of interferon-gamma and interleukin (IL)-10 and between IL-12 and transforming growth factor-beta, which showed an inverse relationship in patients with DF.", "title": "" }, { "docid": "d9605c1cde4c40d69c2faaea15eb466c", "text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.", "title": "" } ]
scidocsrr
c03596679e018c5f34254a773da1524f
Security and privacy for storage and computation in cloud computing
[ { "docid": "97fee760308f95398b6717a091a977d2", "text": "We introduce and formalize the notion of Verifiable Computation , which enables a computationally weak client to “outsource” the computation of a functio n F on various dynamically-chosen inputs x1, ...,xk to one or more workers. The workers return the result of the fu nction evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi . The primary constraint is that the verification of the proof should requi re substantially less computational effort than computingF(xi) from scratch. We present a protocol that allows the worker to return a compu tationally-sound, non-interactive proof that can be verified inO(m· poly(λ)) time, wherem is the bit-length of the output of F , andλ is a security parameter. The protocol requires a one-time pr e-processing stage by the client which takes O(|C| · poly(λ)) time, whereC is the smallest known Boolean circuit computing F . Unlike previous work in this area, our scheme also provides (at no additional cost) input and output privacy for the client, meaning that the workers do not learn any information about t hexi or yi values.", "title": "" } ]
[ { "docid": "590a44ab149b88e536e67622515fdd08", "text": "Chitosan is considered to be one of the most promising and applicable materials in adsorption applications. The existence of amino and hydroxyl groups in its molecules contributes to many possible adsorption interactions between chitosan and pollutants (dyes, metals, ions, phenols, pharmaceuticals/drugs, pesticides, herbicides, etc.). These functional groups can help in establishing positions for modification. Based on the learning from previously published works in literature, researchers have achieved a modification of chitosan with a number of different functional groups. This work summarizes the published works of the last three years (2012-2014) regarding the modification reactions of chitosans (grafting, cross-linking, etc.) and their application to adsorption of different environmental pollutants (in liquid-phase).", "title": "" }, { "docid": "1fa2b4aa557c0efef7a53717dbe0c3fe", "text": "Many birds use grounded running (running without aerial phases) in a wide range of speeds. Contrary to walking and running, numerical investigations of this gait based on the BSLIP (bipedal spring loaded inverted pendulum) template are rare. To obtain template related parameters of quails (e.g. leg stiffness) we used x-ray cinematography combined with ground reaction force measurements of quail grounded running. Interestingly, with speed the quails did not adjust the swing leg's angle of attack with respect to the ground but adapted the angle between legs (which we termed aperture angle), and fixed it about 30ms before touchdown. In simulations with the BSLIP we compared this swing leg alignment policy with the fixed angle of attack with respect to the ground typically used in the literature. We found symmetric periodic grounded running in a simply connected subset comprising one third of the investigated parameter space. The fixed aperture angle strategy revealed improved local stability and surprising tolerance with respect to large perturbations. Starting with the periodic solutions, after step-down step-up or step-up step-down perturbations of 10% leg rest length, in the vast majority of cases the bipedal SLIP could accomplish at least 50 steps to fall. The fixed angle of attack strategy was not feasible. We propose that, in small animals in particular, grounded running may be a common gait that allows highly compliant systems to exploit energy storage without the necessity of quick changes in the locomotor program when facing perturbations.", "title": "" }, { "docid": "5fabe23b0eccc0c8cf752db44e2f7085", "text": "This article presents new evidence from English that the theory of grammar makes a distinction between the contrastive focus and discourse-new status of constituents. The evidence comes from a phonetic investigation which compares the prosody of all-new sentences with the prosody of sentences combining contrastive focus and discourse-new constituents. We have found that while the sentences of these different types in our experimental materials are not distinguished in their patterns of distribution of pitch accents and phonological phrase organization, they do differ in patterns of phonetic prominence—duration, pitch and intensity, which vary according to the composition of the sentence in terms of contrastive and/or new constituents. The central new finding is that contrastive focus constituents are more phonetically prominent than discourse new constituents that are contained within the same sentence. 
These distinctions in phonetic prominence are plausibly the consequence of distinctions in the phonological representation of phrasal prosodic prominence (stress) for contrastive focus and discourse-new constituents in English.", "title": "" }, { "docid": "9b10757ca3ca84784033c20f064078b7", "text": "Snafu, or Snake Functions, is a modular system to host, execute and manage language-level functions offered as stateless (micro-)services to diverse external triggers. The system interfaces resemble those of commercial FaaS providers but its implementation provides distinct features which make it overall useful to research on FaaS and prototyping of FaaSbased applications. This paper argues about the system motivation in the presence of already existing alternatives, its design and architecture, the open source implementation and collected metrics which characterise the system.", "title": "" }, { "docid": "029cca0b7e62f9b52e3d35422c11cea4", "text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.", "title": "" }, { "docid": "a56a95db6d9d0f0ccf26192b7e2322ff", "text": "CRISPR-Cas9 is a versatile genome editing technology for studying the functions of genetic elements. To broadly enable the application of Cas9 in vivo, we established a Cre-dependent Cas9 knockin mouse. We demonstrated in vivo as well as ex vivo genome editing using adeno-associated virus (AAV)-, lentivirus-, or particle-mediated delivery of guide RNA in neurons, immune cells, and endothelial cells. Using these mice, we simultaneously modeled the dynamics of KRAS, p53, and LKB1, the top three significantly mutated genes in lung adenocarcinoma. Delivery of a single AAV vector in the lung generated loss-of-function mutations in p53 and Lkb1, as well as homology-directed repair-mediated Kras(G12D) mutations, leading to macroscopic tumors of adenocarcinoma pathology. Together, these results suggest that Cas9 mice empower a wide range of biological and disease modeling applications.", "title": "" }, { "docid": "e27da58188be54b71187d3489fa6b4e7", "text": "In a prospective-longitudinal study of a representative birth cohort, we tested why stressful experiences lead to depression in some people but not in others. A functional polymorphism in the promoter region of the serotonin transporter (5-HT T) gene was found to moderate the influence of stressful life events on depression. 
Individuals with one or two copies of the short allele of the 5-HT T promoter polymorphism exhibited more depressive symptoms, diagnosable depression, and suicidality in relation to stressful life events than individuals homozygous for the long allele. This epidemiological study thus provides evidence of a gene-by-environment interaction, in which an individual's response to environmental insults is moderated by his or her genetic makeup.", "title": "" }, { "docid": "2650ec74eb9b8c368f213212218989ea", "text": "Illumina-based next generation sequencing (NGS) has accelerated biomedical discovery through its ability to generate thousands of gigabases of sequencing output per run at a fraction of the time and cost of conventional technologies. The process typically involves four basic steps: library preparation, cluster generation, sequencing, and data analysis. In 2015, a new chemistry of cluster generation was introduced in the newer Illumina machines (HiSeq 3000/4000/X Ten) called exclusion amplification (ExAmp), which was a fundamental shift from the earlier method of random cluster generation by bridge amplification on a non-patterned flow cell.", "title": "" }, { "docid": "a8de67cc99337dd8cdb92e1d6859f211", "text": "We present a novel way for designing complex joint inference and learning models using Saul (Kordjamshidi et al., 2015), a recently-introduced declarative learning-based programming language (DeLBP). We enrich Saul with components that are necessary for a broad range of learning based Natural Language Processing tasks at various levels of granularity. We illustrate these advances using three different, well-known NLP problems, and show how these generic learning and inference modules can directly exploit Saul’s graph-based data representation. These properties allow the programmer to easily switch between different model formulations and configurations, and consider various kinds of dependencies and correlations among variables of interest with minimal programming effort. We argue that Saul provides an extremely useful paradigm both for the design of advanced NLP systems and for supporting advanced research in NLP.", "title": "" }, { "docid": "6ef52ad99498d944e9479252d22be9c8", "text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.", "title": "" }, { "docid": "42fcc24e20ad15de00eb1f93add8b827", "text": "Although scientometrics is seeing increasing use in Information Systems (IS) research, in particular for evaluating research efforts and measuring scholarly influence; historically, scientometric IS studies are focused primarily on ranking authors, journals, or institutions.
Notwithstanding the usefulness of ranking studies for evaluating the productivity of the IS field’s formal communication channels and its scholars, the IS field has yet to exploit the full potential that scientometrics offers, especially towards its progress as a discipline. This study makes a contribution by raising the discourse surrounding the value of scientometric research in IS, and proposes a framework that uncovers the multi-dimensional bases for citation behaviour and its epistemological implications on the creation, transfer, and growth of IS knowledge. Having identified 112 empirical research evaluation studies in IS, we select 44 substantive scientometric IS studies for in-depth content analysis. The findings from this review allow us to map an engaging future in scientometric research, especially towards enhancing the IS field’s conceptual and theoretical development. Journal of Information Technology advance online publication, 12 January 2016; doi:10.1057/jit.2015.29", "title": "" }, { "docid": "2967df08ad0b9987ce2d6cb6006d3e69", "text": "As a crucial security problem, anti-spoofing in biometrics, and particularly for the face modality, has achieved great progress in the recent years. Still, new threats arrive inform of better, more realistic and more sophisticated spoofing attacks. The objective of the 2nd Competition on Counter Measures to 2D Face Spoofing Attacks is to challenge researchers to create counter measures effectively detecting a variety of attacks. The submitted propositions are evaluated on the Replay-Attack database and the achieved results are presented in this paper.", "title": "" }, { "docid": "ad24de7b81fec45126c756b41e39822b", "text": "University teachers provided first year Arts students with hundreds of cinematic images online to analyse as a key part of their predominantly face-to-face undergraduate course. This qualitative study investigates the extent to which the groups engaged in learning involving their analysis of the images and how this was related to their perception of the ICT-mediated environment. Interviews and questionnaires completed by students revealed that the extent of engaged learning was related to the quality of the approach to groupwork reported by the students, the quality of their approach to the analysis of the images and their perceptions of key aspects of the online environment which provided the images. The findings have implications for the design and approach to teaching best suited for students involved in groupwork and the use of ICT resources provided to promote engaged experiences of learning.", "title": "" }, { "docid": "265a709088f671ba484ffba937ae2977", "text": "We test a number of the leading computational color constancy algorithms using a comprehensive set of images. These were of 33 different scenes under 11 different sources representative of common illumination conditions. The algorithms studied include two gray world methods, a version of the Retinex method, several variants of Forsyth's gamut-mapping method, Cardei et al.'s neural net method, and Finlayson et al.'s Color by Correlation method. We discuss a number of issues in applying color constancy ideas to image data, and study in depth the effect of different preprocessing strategies. We compare the performance of the algorithms on image data with their performance on synthesized data. 
All data used for this study are available online at http://www.cs.sfu.ca/(tilde)color/data, and implementations for most of the algorithms are also available (http://www.cs.sfu.ca/(tilde)color/code). Experiments with synthesized data (part one of this paper) suggested that the methods which emphasize the use of the input data statistics, specifically color by correlation and the neural net algorithm, are potentially the most effective at estimating the chromaticity of the scene illuminant. Unfortunately, we were unable to realize comparable performance on real images. Here exploiting pixel intensity proved to be more beneficial than exploiting the details of image chromaticity statistics, and the three-dimensional (3-D) gamut-mapping algorithms gave the best performance.", "title": "" }, { "docid": "b44df1268804e966734ea404b8c29360", "text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.", "title": "" }, { "docid": "37b8114afeba61ac1e381405f2503ced", "text": "Measurements of the phases of free jet waves relative to an acoustic excitation, and of the pattern and time phase of the sound pressure produced by the same jet impinging on an edge, provide a consistent model for Stage I frequencies of edge tones and of an organ pipe with identical geometry. Both systems are explained entirely in terms of volume displacement of air by the jet. During edge-tone oscillation, 180 ø of phase delay occur on the jet. Peak positive acoustic pressure on a given side of the edge occurs at the instant the jet profile crosses the edge and starts into that side. For the pipe, additional phase shifts occur that depend on the driving points for the jet current, the Q of the pipe, and the frequency of oscillation. Introduction of this additional phase shift yields an accurate prediction of the frequencies of a blown pipe and the blowing pressure at which mode jumps will occur.", "title": "" }, { "docid": "d15ce9f62f88a07db6fa427fae61f26c", "text": "This paper introduced a detail ElGamal digital signature scheme, and mainly analyzed the existing problems of the ElGamal digital signature scheme. Then improved the scheme according to the existing problems of ElGamal digital signature scheme, and proposed an implicit ElGamal type digital signature scheme with the function of message recovery. As for the problem that message recovery not being allowed by ElGamal signature scheme, this article approached a method to recover message. This method will make ElGamal signature scheme have the function of message recovery. 
On this basis, against that part of signature was used on most attacks for ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed, after having tried to hid part of signature message and refining forthcoming implicit type signature scheme. The safety of the refined scheme was anlyzed, and its results indicated that the new scheme was better than the old one.", "title": "" }, { "docid": "38e2848daec38de283341bd3055915c9", "text": "IoT devices are becoming increasingly intelligent and context-aware. Sound is an attractive sensory modality because it is information-rich but not as computationally demanding as alternatives such as vision. New applications of ultra-low power (ULP), ‘always-on’ intelligent acoustic sensing includes agricultural monitoring to detect pests or precipitation, infrastructure health tracking to recognize acoustic symptoms, and security/safety monitoring to identify dangerous conditions. A major impediment for the adoption of always-on, context-aware sensing is power consumption, particularly for ultra-small IoT devices requiring long-term operation without battery replacement. To sustain operation with a 1mm2 solar cell in ambient light (100lux) or achieve a lifetime of 10 years using a button cell battery (2mAh), <20nW power consumption must be achieved, which is more than 2 orders of magnitude lower than current state-of-the-art acoustic sensing systems [1,2]. More broadly a previous ULP signal acquisition IC [3] consumes just 3nW while 64nW ECG monitoring system [4] includes back-end classification, however there are no sub-20nW complete sensing systems with both analog frontend and digital backend.", "title": "" }, { "docid": "b1b6e670f21479956d2bbe281c6ff556", "text": "Near real-time data from the MODIS satellite sensor was used to detect and trace a harmful algal bloom (HAB), or red tide, in SW Florida coastal waters from October to December 2004. MODIS fluorescence line height (FLH in W m 2 Am 1 sr ) data showed the highest correlation with near-concurrent in situ chlorophyll-a concentration (Chl in mg m ). For Chl ranging between 0.4 to 4 mg m 3 the ratio between MODIS FLH and in situ Chl is about 0.1 W m 2 Am 1 sr 1 per mg m 3 chlorophyll (Chl=1.255 (FLH 10), r =0.92, n =77). In contrast, the band-ratio chlorophyll product of either MODIS or SeaWiFS in this complex coastal environment provided false information. Errors in the satellite Chl data can be both negative and positive (3–15 times higher than in situ Chl) and these data are often inconsistent either spatially or temporally, due to interferences of other water constituents. The red tide that formed from November to December 2004 off SW Florida was revealed by MODIS FLH imagery, and was confirmed by field sampling to contain medium (10 to 10 cells L ) to high (>10 cells L ) concentrations of the toxic dinoflagellate Karenia brevis. The FLH imagery also showed that the bloom started in midOctober south of Charlotte Harbor, and that it developed and moved to the south and southwest in the subsequent weeks. Despite some artifacts in the data and uncertainty caused by factors such as unknown fluorescence efficiency, our results show that the MODIS FLH data provide an unprecedented tool for research and managers to study and monitor algal blooms in coastal environments. D 2005 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
b72babe4bd9f883b21d78ed3b85770e2
FedX: Optimization Techniques for Federated Query Processing on Linked Data
[ { "docid": "a9b159f9048c1dadb941e1462ba5826f", "text": "Distributed data processing is becoming a reality. Businesses want to do it for many reasons, and they often must do it in order to stay competitive. While much of the infrastructure for distributed data processing is already there (e.g., modern network technology), a number of issues make distributed data processing still a complex undertaking: (1) distributed systems can become very large, involving thousands of heterogeneous sites including PCs and mainframe server machines; (2) the state of a distributed system changes rapidly because the load of sites varies over time and new sites are added to the system; (3) legacy systems need to be integrated—such legacy systems usually have not been designed for distributed data processing and now need to interact with other (modern) systems in a distributed environment. This paper presents the state of the art of query processing for distributed database and information systems. The paper presents the “textbook” architecture for distributed query processing and a series of techniques that are particularly useful for distributed database systems. These techniques include special join techniques, techniques to exploit intraquery paralleli sm, techniques to reduce communication costs, and techniques to exploit caching and replication of data. Furthermore, the paper discusses different kinds of distributed systems such as client-server, middleware (multitier), and heterogeneous database systems, and shows how query processing works in these systems.", "title": "" }, { "docid": "9de44948e28892190f461199a1d33935", "text": "As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase – and potentially one of the biggest challenges – for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples. 1 RDF in centralized relational databases The increasing availability and use of RDF-based information in the last decade has led to an increasing need for systems that can store RDF and, more importantly, efficiencly evaluate complex queries over large bodies of RDF data. The database community has developed a large number of systems to satisfy this need, partly reusing and adapting well-established techniques from relational databases [122]. The majority of these systems can be grouped into one of the following three classes: 1. 
Triple stores that store RDF triples in a single relational table, usually with additional indexes and statistics, 2. vertically partitioned tables that maintain one table for each property, and 3. Schema-specific solutions that store RDF in a number of property tables where several properties are jointly represented. In the following sections, we will describe each of these classes in detail, focusing on two important aspects of these systems: storage and indexing, i.e., how are RDF triples mapped to relational tables and which additional support structures are created; and query processing, i.e., how SPARQL queries are mapped to SQL, which additional operators are introduced, and how efficient execution plans for queries are determined. In addition to these purely relational solutions, a number of specialized RDF systems has been proposed that built on nonrelational technologies, we will briefly discuss some of these systems. Note that we will focus on SPARQL processing, which is not aware of underlying RDF/S or OWL schema and cannot exploit any information about subclasses; this is usually done in an additional layer on top. We will explain especially the different storage variants with the running example from Figure 1, some simple RDF facts from a university scenario. Here, each line corresponds to a fact (triple, statement), with a subject (usually a resource), a property (or predicate), and an object (which can be a resource or a constant). Even though resources are represented by URIs in RDF, we use string constants here for simplicity. A collection of RDF facts can also be represented as a graph. Here, resources (and constants) are nodes, and for each fact <s,p,o>, an edge from s to o is added with label p. Figure 2 shows the graph representation for the RDF example from Figure 1. <Katja,teaches,Databases> <Katja,works_for,MPI Informatics> <Katja,PhD_from,TU Ilmenau> <Martin,teaches,Databases> <Martin,works_for,MPI Informatics> <Martin,PhD_from,Saarland University> <Ralf,teaches,Information Retrieval> <Ralf,PhD_from,Saarland University> <Ralf,works_for,Saarland University> <Saarland University,located_in,Germany> <MPI Informatics,located_in,Germany> Fig. 1. Running example for RDF data", "title": "" } ]
[ { "docid": "ca990b1b43ca024366a2fe73e2a21dae", "text": "Guanabenz (2,6-dichlorobenzylidene-amino-guanidine) is a centrally acting antihypertensive drug whose mechanism of action is via alpha2 adrenoceptors or, more likely, imidazoline receptors. Guanabenz is marketed as an antihypertensive agent in human medicine (Wytensin tablets, Wyeth Pharmaceuticals). Guanabenz has reportedly been administered to racing horses and is classified by the Association of Racing Commissioners International as a class 3 foreign substance. As such, its identification in a postrace sample may result in significant sanctions against the trainer of the horse. The present study examined liquid chromatographic/tandem quadrupole mass spectrometric (LC-MS/MS) detection of guanabenz in serum samples from horses treated with guanabenz by rapid i.v. injection at 0.04 and 0.2 mg/kg. Using a method adapted from previous work with clenbuterol, the parent compound was detected in serum with an apparent limit of detection of approximately 0.03 ng/ml and the limit of quantitation was 0.2 ng/ml. Serum concentrations of guanabenz peaked at approximately 100 ng/ml after the 0.2 mg/kg dose, and the parent compound was detected for up to 8 hours after the 0.04 mg/kg dose. Urine samples tested after administration of guanabenz at these dosages yielded evidence of at least one glucuronide metabolite, with the glucuronide ring apparently linked to a ring hydroxyl group or a guanidinium hydroxylamine. The LC-MS/MS results presented here form the basis of a confirmatory test for guanabenz in racing horses.", "title": "" }, { "docid": "4cf05216efd9f075024d4a3e63cdd511", "text": "BACKGROUND\nSecondary failure of oral hypoglycemic agents is common in patients with type 2 diabetes mellitus (T2DM); thus, patients often need insulin therapy. The most common complication of insulin treatment is lipohypertrophy (LH).\n\n\nOBJECTIVES\nThis study was conducted to estimate the prevalence of LH among insulin-treated patients with Patients with T2DM, to identify the risk factors for the development of LH, and to examine the association between LH and glycemic control.\n\n\nPATIENTS AND METHODS\nA total of 1090 patients with T2DM aged 20 to 89 years, who attended the diabetes clinics at the National Center for Diabetes, Endocrinology, and Genetics (NCDEG, Amman, Jordan) between October 2011 and January 2012, were enrolled. The presence of LH was examined by inspection and palpation of insulin injection sites at the time of the visit as relevant clinical and laboratory data were obtained. The LH was defined as a local tumor-like swelling of subcutaneous fatty tissue at the site of repeated insulin injections.\n\n\nRESULTS\nThe overall prevalence of LH was 37.3% (27.4% grade 1, 9.7% grade 2, and 0.2% grade 3). The LH was significantly associated with the duration of diabetes, needle length, duration of insulin therapy, lack of systematic rotation of insulin injection sites, and poor glycemic control.\n\n\nCONCLUSIONS\nThe LH is a common problem in insulin-treated Jordanian patients with T2DM. More efforts are needed to educate patients and health workers on simple interventions such as using shorter needles and frequent rotation of the insulin injection sites to avoid LH and improve glycemic control.", "title": "" }, { "docid": "d1cde8ce9934723224ecf21c3cab6615", "text": "Deep Neural Networks (DNNs) denote multilayer artificial neural networks with more than one hidden layer and millions of free parameters. 
We propose a Generalized Discriminant Analysis (GerDA) based on DNNs to learn discriminative features of low dimension optimized with respect to a fast classification from a large set of acoustic features for emotion recognition. On nine frequently used emotional speech corpora, we compare the performance of GerDA features and their subsequent linear classification with previously reported benchmarks obtained using the same set of acoustic features classified by Support Vector Machines (SVMs). Our results impressively show that low-dimensional GerDA features capture hidden information from the acoustic features leading to a significantly raised unweighted average recall and considerably raised weighted average recall.", "title": "" }, { "docid": "4627d8e86bec798979962847523cc7e0", "text": "Consuming news over online media has witnessed rapid growth in recent years, especially with the increasing popularity of social media. However, the ease and speed with which users can access and share information online facilitated the dissemination of false or unverified information. One way of assessing the credibility of online news stories is by examining the attached images. These images could be fake, manipulated or not belonging to the context of the accompanying news story. Previous attempts to news verification provided the user with a set of related images for manual inspection. In this work, we present a semi-automatic approach to assist news-consumers in instantaneously assessing the credibility of information in hypertext news articles by means of meta-data and feature analysis of images in the articles. In the first phase, we use a hybrid approach including image and text clustering techniques for checking the authenticity of an image. In the second phase, we use a hierarchical feature analysis technique for checking the alteration in an image, where different sets of features, such as edges and SURF, are used. In contrast to recently reported manual news verification, our presented work shows a quantitative measurement on a custom dataset. Results revealed an accuracy of 72.7% for checking the authenticity of attached images with a dataset of 55 articles. Finding alterations in images resulted in an accuracy of 88% for a dataset of 50 images.", "title": "" }, { "docid": "54368ada8cc316af20995b5096764bd1", "text": "Effective managing and sharing of knowledge has the power to improve individual’s lives and society. However, research has shown that people are reluctant to share. Knowledge sharing (KS) involve not only our knowledge, but a process of giving and receiving of knowledge with others. Knowledge sharing capabilities (KSC) is an individual’s capability to share experience, expertise and know-how with other employees in the organization. Previous studies identified many factors affecting KSC either in public or private sectors. Upon a critical review on factors affecting KS and factors affecting KSC, this paper attempts to examine the factors that have been cited as significant in influencing employees KSC within Electronic Government (EG) agencies in Malaysia. Two capable factors that are considered in this study are technical factor and non-technical factor. 
This paper proposes an integrated conceptual framework of employees KSC which can be used for research enhancement.", "title": "" }, { "docid": "e55b84112fdb179faa8affbf9fed8c72", "text": "A polynomial threshold function (PTF) of degree <i>d</i> is a boolean function of the form <i>f</i>=<i>sgn</i>(<i>p</i>), where <i>p</i> is a degree-<i>d</i> polynomial, and <i>sgn</i> is the sign function. The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function <i>g</i> is called δ-close to constant if, for some <i>v</i>∈{1,−1}, we have <i>g</i>(<i>x</i>)=<i>v</i> for all but at most δ fraction of inputs. We show for every PTF <i>f</i> of degree <i>d</i>≥ 1, and parameters 0<δ, <i>r</i>≤ 1/16, that \n<table class=\"display dcenter\"><tr style=\"vertical-align:middle\"><td class=\"dcell\"><i>Pr</i><sub>ρ∼ <i>R</i><sub><i>r</i></sub></sub> [<i>f</i><sub>ρ</sub> is not  δ -close to constant] ≤ </td><td class=\"dcell\">√</td><td class=\"dcell\"><table style=\"border:0;border-spacing:1;border-collapse:separate;\" class=\"cellpadding0\"><tr><td class=\"hbar\"></td></tr><tr><td style=\"text-align:center;white-space:nowrap\" ><i>r</i></td></tr></table></td><td class=\"dcell\">· (log<i>r</i><sup>−1</sup> · logδ<sup>−1</sup>)<sup><i>O</i>(<i>d</i><sup>2</sup>)</sup>,  </td></tr></table> where ρ∼ <i>R</i><sub><i>r</i></sub> is a random restriction leaving each variable, independently, free with probability <i>r</i>, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random <em>block</em> restrictions: given an arbitrary partitioning of input variables into <i>m</i> blocks, a random block restriction picks a uniformly random block ℓ∈ [<i>m</i>] and assigns 1 or −1, uniformly at random, to all variable outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF <i>f</i> of degree <i>d</i> becomes δ-close to constant when hit with a random block restriction, except with probability at most <i>m</i><sup>−1/2</sup> · (log<i>m</i>· logδ<sup>−1</sup>)<sup><i>O</i>(<i>d</i><sup>2</sup>)</sup>. As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1≤ <i>d</i>≪ √log<i>n</i>/loglog<i>n</i>, generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (<em>STOC</em>, 2016) and Chen, Santhanam, and Srinivasan (<em>CCC</em>, 2016). In particular, we show that there is an <i>n</i>-variate boolean function <i>F</i><sub><i>n</i></sub> ∈ <i>P</i> such that every depth-2 circuit with PTF gates of degree <i>d</i>≥ 1 that computes <i>F</i><sub><i>n</i></sub> must have at least (<i>n</i><sup>3/2+1/<i>d</i></sup>)· (log<i>n</i>)<sup>−<i>O</i>(<i>d</i><sup>2</sup>)</sup> wires. For constant depths greater than 2, we also show average-case lower bounds for such circuits with super-linear number of wires. These are the first super-linear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-<i>d</i> PTFs due to Kane (<em>Computational Complexity</em>, 2014), and the Littlewood-Offord type anticoncentration bound for degree-<i>d</i> multilinear polynomials due to Meka, Nguyen, and Vu (<em>Theory of Computing</em>, 2016). 
Finally, we give <em>derandomized</em> versions of our Block Restriction Lemma and Littlewood-Offord type anticoncentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (<em>SICOMP</em>, 2013).", "title": "" }, { "docid": "83c81ecb870e84d4e8ab490da6caeae2", "text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.", "title": "" }, { "docid": "4c542a4b5a948a037a4c49bce238d04a", "text": "Agar-based nanocomposite films with different types of nanoclays, such as Cloisite Na+, Cloisite 30B, and Cloisite 20A, were prepared using a solvent casting method, and their tensile, water vapor barrier, and antimicrobial properties were tested. Tensile strength (TS), elongation at break (E), and water vapor permeability (WVP) of control agar film were 29.7±1.7 MPa, 45.3±9.6%, and (2.22±0.19)×10(-9) g·m/m2·s·Pa, respectively. All the film properties tested, including transmittance, tensile properties, WVP, and X-ray diffraction patterns, indicated that Cloisite Na+ was the most compatible with agar matrix. TS of the nanocomposite films prepared with 5% Cloisite Na+ increased by 18%, while WVP of the nanocomposite films decreased by 24% through nanoclay compounding. Among the agar/clay nanocomposite films tested, only agar/Cloisite 30B nanocomposite film showed a bacteriostatic function against Listeria monocytogenes.", "title": "" }, { "docid": "13abacabef42365ac61be64597698f78", "text": "Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation. As it can be edited by anyone, entries frequently get vandalized, leading to the possibility that it might spread of falsified information if such posts are not detected. The WSDM 2017 Wiki Vandalism Detection Challenge requires us to solve this problem by computing a vandalism score denoting the likelihood that a revision corresponds to an act of vandalism and performance is measured using the ROC-AUC obtained on a held-out test set. This paper provides the details of our submission that obtained an ROC-AUC score of 0.91976 in the final evaluation.", "title": "" }, { "docid": "94fc516df0c0a5f0ebaf671befe10982", "text": "In this paper, an 8th-order cavity filter with two symmetrical transmission zeros in stopband is designedwith the method of generalized Chebyshev synthesis so as to satisfy the IMT-Advanced system demands. To shorten the development cycle of the filter from two or three days to several hours, a co-simulation with Ansoft HFSS and Designer is presented. The effectiveness of the co-simulation method is validated by the excellent consistency between the simulation and the experiment results.", "title": "" }, { "docid": "c3e46c3317d81b2d8b8c53f7e5cd37b9", "text": "A novel rainfall prediction method has been proposed. In the present work rainfall prediction in Southern part of West Bengal (India) has been conducted. 
A two-step method has been employed. Greedy forward selection algorithm is used to reduce the feature set and to find the most promising features for rainfall prediction. First, in the training phase the data is clustered by applying k-means algorithm, then for each cluster a separate Neural Network (NN) is trained. The proposed two step prediction model (Hybrid Neural Network or HNN) has been compared with MLP-FFN classifier in terms of several statistical performance measuring metrics. The data for experimental purpose is collected by Dumdum meteorological station (West Bengal, India) over the period from 1989 to 1995. The experimental results have suggested a reasonable improvement over traditional methods in predicting rainfall. The proposed HNN model outperformed the compared models by achieving 84.26% accuracy without feature selection and 89.54% accuracy with feature selection.", "title": "" }, { "docid": "e84174b539588b969f7d2230063b30c4", "text": "STUDY DESIGN\nThis was a biomechanical push-out testing study using a porcine model.\n\n\nOBJECTIVE\nThe purpose was to evaluate the strength of implant-bone interface of a porous titanium scaffold by comparing it to polyetheretherketone (PEEK) and allograft.\n\n\nSUMMARY OF BACKGROUND DATA\nOsseointegration is important for achieving maximal stability of spinal fusion implants and it is desirable to achieve as quickly as possible. Common PEEK interbody fusion implants appear to have limited osseointegration potential because of the formation of fibrous tissue along the implant-bone interface. Porous, three-dimensional titanium materials may be an option to enhance osseointegration.\n\n\nMETHODS\nUsing the skulls of two swine, in the region of the os frontale, 16 identical holes (4 mm diameter) were drilled to 10 mm depth in each skull. Porous titanium, PEEK, and allograft pins were press fit into the holes. After 5 weeks, animals were euthanized and the skull sections with the implants were cut into sections with each pin centered within a section. Push-out testing was performed using an MTS machine with a push rate of 6 mm/min. Load-deformation curves were used to compute the extrinsic material properties of the bone samples. Maximum force (N) and shear strength (MPa) were extracted from the output to record the bonding strength between the implant and surrounding bone. When calculating shear strength, maximum force was normalized by the actual implant surface area in contact with surrounding bone.\n\n\nRESULTS\nMean push-out shear strength was significantly greater in the porous titanium scaffold group than in the PEEK or allograft groups (10.2 vs. 1.5 vs. 3.1 MPa, respectively; P < 0.05).\n\n\nCONCLUSION\nThe push-out strength was significantly greater for the implants with porous titanium coating compared with the PEEK or allograft. These results suggest that the material has promise for facilitating osseointegration for implants, including interbody devices for spinal fusion.\n\n\nLEVEL OF EVIDENCE\nN/A.", "title": "" }, { "docid": "1a747f8474841b6b99184487994ad6a2", "text": "This paper discusses the effects of multivariate correlation analysis on the DDoS detection and proposes an example, a covariance analysis model for detecting SYN flooding attacks. The simulation results show that this method is highly accurate in detecting malicious network traffic in DDoS attacks of different intensities. This method can effectively differentiate between normal and attack traffic. 
Indeed, this method can detect even very subtle attacks only slightly different from the normal behaviors. The linear complexity of the method makes its real time detection practical. The covariance model in this paper to some extent verifies the effectiveness of multivariate correlation analysis for DDoS detection. Some open issues still exist in this model for further research.", "title": "" }, { "docid": "b5b8553b1f50a48af88f9902eab74254", "text": "In this paper we introduce the Fourier tag, a synthetic fiducial marker used to visually encode information and provide controllable positioning. The Fourier tag is a synthetic target akin to a bar-code that specifies multi-bit information which can be efficiently and robustly detected in an image. Moreover, the Fourier tag has the beneficial property that the bit string it encodes has variable length as a function of the distance between the camera and the target. This follows from the fact that the effective resolution decreases as an effect of perspective. This paper introduces the Fourier tag, describes its design, and illustrates its properties experimentally.", "title": "" }, { "docid": "22c72f94040cd65dde8e00a7221d2432", "text": "Research on “How to create a fair, convenient attendance management system”, is being pursued by academics and government departments fervently. This study is based on the biometric recognition technology. The hand geometry machine captures the personal hand geometry data as the biometric code and applies this data in the attendance management system as the attendance record. The attendance records that use this technology is difficult to replicate by others. It can improve the reliability of the attendance records and avoid fraudulent issues that happen when you use a register. This research uses the social survey method-questionnaire to evaluate the theory and practice of introducing biometric recognition technology-hand geometry capturing into the attendance management system.", "title": "" }, { "docid": "fba7801d0b187a9a5fbb00c9d4690944", "text": "Acute pulmonary embolism (PE) poses a significant burden on health and survival. Its severity ranges from asymptomatic, incidentally discovered subsegmental thrombi to massive, pressor-dependent PE complicated by cardiogenic shock and multisystem organ failure. Rapid and accurate risk stratification is therefore of paramount importance to ensure the highest quality of care. This article critically reviews currently available and emerging tools for risk-stratifying acute PE, and particularly for distinguishing between elevated (intermediate) and low risk among normotensive patients. We focus on the potential value of risk assessment strategies for optimizing severity-adjusted management. Apart from reviewing the current evidence on advanced early therapy of acute PE (thrombolysis, surgery, catheter interventions, vena cava filters), we discuss recent advances in oral anticoagulation with vitamin K antagonists, and with new direct inhibitors of factor Xa and thrombin, which may contribute to profound changes in the treatment and secondary prophylaxis of venous thrombo-embolism in the near future.", "title": "" }, { "docid": "17bf75156f1ffe0daffd3dbc5dec5eb9", "text": "Celebrities are admired, appreciated and imitated all over the world. As a natural result of this, today many brands choose to work with celebrities for their advertisements. 
It can be said that the more the brands include celebrities in their marketing communication strategies, the tougher the competition in this field becomes and they allocate a large portion of their marketing budget to this. Brands invest in celebrities who will represent them in order to build the image they want to create. This study aimed to bring under spotlight the perceptions of Turkish customers regarding the use of celebrities in advertisements and marketing communication and try to understand their possible effects on subsequent purchasing decisions. In addition, consumers’ reactions and perceptions were investigated in the context of the product-celebrity match, to what extent the celebrity conforms to the concept of the advertisement and the celebrity-target audience match. In order to achieve this purpose, a quantitative research was conducted as a case study concerning Mavi Jeans (textile company). Information was obtained through survey. The results from this case study are supported by relevant theories concerning the main subject. The most valuable result would be that instead of creating an advertisement around a celebrity in demand at the time, using a celebrity that fits the concept of the advertisement and feeds the concept rather than replaces it, that is celebrity endorsement, will lead to more striking and positive results. Keywords—Celebrity endorsement, product-celebrity match, advertising.", "title": "" }, { "docid": "7adbcbcf5d458087d6f261d060e6c12b", "text": "Operation of MOS devices in the strong, moderate, and weak inversion regions is considered. The advantages of designing the input differential stage of a CMOS op amp to operate in the weak or moderate inversion region are presented. These advantages include higher voltage gain, less distortion, and ease of compensation. Specific design guidelines are presented to optimize amplifier performance. Simulations that demonstrate the expected improvements are given.", "title": "" }, { "docid": "7d7db3f70ba6bcb5f9bf615bd8110eba", "text": "Freshwater and energy are essential commodities for well being of mankind. Due to increasing population growth on the one hand, and rapid industrialization on the other, today’s world is facing unprecedented challenge of meeting the current needs for these two commodities as well as ensuring the needs of future generations. One approach to this global crisis of water and energy supply is to utilize renewable energy sources to produce freshwater from impaired water sources by desalination. Sustainable practices and innovative desalination technologies for water reuse and energy recovery (staging, waste heat utilization, hybridization) have the potential to reduce the stress on the existing water and energy sources with a minimal impact to the environment. This paper discusses existing and emerging desalination technologies and possible combinations of renewable energy sources to drive them and associated desalination costs. It is suggested that a holistic approach of coupling renewable energy sources with technologies for recovery, reuse, and recycle of both energy and water can be a sustainable and environment friendly approach to meet the world’s energy and water needs. High capital costs for renewable energy sources for small-scale applications suggest that a hybrid energy source comprising both grid-powered energy and renewable energy will reduce the desalination costs considering present economics of energy. 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
7b8f525c5d3cce9138a472cadfa0403a
Automatic Generation of Raven's Progressive Matrices
[ { "docid": "4b0eec16de82592d1f7c715ad25905a9", "text": "We present a computational model for solving Raven’s Progressive Matrices. This model combines qualitative spatial representations with analogical comparison via structuremapping. All representations are automatically computed by the model. We show that it achieves a level of performance on the Standard Progressive Matrices that is above that of most adults, and that the problems it fails on are also the hardest for people.", "title": "" } ]
[ { "docid": "fb7f079d104e81db41b01afe67cdf3b0", "text": "In this paper, we address natural human-robot interaction (HRI) in a smart assisted living (SAIL) system for the elderly and the disabled. Two common HRI problems are studied: hand gesture recognition and daily activity recognition. For hand gesture recognition, we implemented a neural network for gesture spotting and a hierarchical hidden Markov model for context-based recognition. For daily activity recognition, a multisensor fusion scheme is developed to process motion data collected from the foot and the waist of a human subject. Experiments using a prototype wearable sensor system show the effectiveness and accuracy of our algorithms.", "title": "" }, { "docid": "840919760f5cc4839fe027d3a744dbd3", "text": "This paper deals with the development and implementation of an on-line stator resistance and permanent magnet flux linkage identification approach devoted to three-phase and open-end winding permanent magnet synchronous motor drives. In particular, the stator resistance and the permanent magnet flux linkage variations are independently determined by exploiting a current vector control strategy, in which one of the phase currents is continuously maintained to zero while the others are suitably modified in order to establish the same rotating magnetomotive force. Moreover, other motor parameters can be evaluated after re-establishing the normal operation of the drive, under the same operating conditions. As will be demonstrated, neither additional sensors nor special tests are required in the proposed method; Motor electrical parameters can be “on-line” estimated in a wide operating range, avoiding any detrimental impact on the torque capability of the PMSM drive.", "title": "" }, { "docid": "d922dbcdd2fb86e7582a4fb78990990e", "text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.", "title": "" }, { "docid": "8ab53b0100ce36ace61660c9c8e208b4", "text": "A novel current-pumped battery charger (CPBC) is proposed in this paper to increase the Li-ion battery charging performance. A complete charging process, consisting of three subprocesses, namely: 1) the bulk current charging process; 2) the pulsed current charging process; and 3) the pulsed float charging process, can be automatically implemented by using the inherent characteristics of current-pumped phase-locked loop (CPLL). A design example for a 700-mA ldr h Li-ion battery is built to assess the CPBC's performance. In comparison with the conventional phase-locked battery charger, the battery available capacity and charging efficiency of the proposed CPBC are improved by about 6.9% and 1.5%, respectively. 
The results of the experiment show that a CPLL is really suitable for carrying out a Li-ion battery pulse charger.", "title": "" }, { "docid": "52786e9ad3d055a83cae13f422aefcdd", "text": "The lack of reliable sensory feedback has been one of the barriers in prosthetic hand development. Restoring sensory function from prosthetic hand to amputee remains a great challenge to neural engineering. In this paper, we present the development of a sensory feedback system based on the phenomenon of evoked tactile sensation (ETS) at the stump skin of residual limb induced by transcutaneous electrical nerve stimulation (TENS). The system could map a dynamic pattern of stimuli to an electrode placed on the corresponding projected finger areas on the stump skin. A pressure transducer placed at the tip of prosthetic fingers was used to sense contact pressure, and a high performance DSP processor sampled pressure signals, and calculated the amplitude of feedback stimulation in real-time. Biphasic and charge-balanced current pulses with amplitude modulation generated by a multi-channel laboratory stimulator were delivered to activate sensory nerves beneath the skin. We tested this sensory feedback system in amputee subjects. Preliminary results showed that the subjects could perceive different levels of pressure at the tip of prosthetic finger through evoked tactile sensation (ETS) with distinct grades and modalities. We demonstrated the feasibility to restore the perceptual sensation from prosthetic fingers to amputee based on the phenomenon of evoked tactile sensation (ETS) with TENS.", "title": "" }, { "docid": "3ff13bb873dd9a8deada0a7837c5eca4", "text": "This work investigates the use of deep fully convolutional neural networks (DFCNN) for pixel-wise scene labeling of Earth Observation images. Especially, we train a variant of the SegNet architecture on remote sensing data over an urban area and study different strategies for performing accurate semantic segmentation. Our contributions are the following: 1) we transfer efficiently a DFCNN from generic everyday images to remote sensing images; 2) we introduce a multi-kernel convolutional layer for fast aggregation of predictions at multiple scales; 3) we perform data fusion from heterogeneous sensors (optical and laser) using residual correction. Our framework improves state-of-the-art accuracy on the ISPRS Vaihingen 2D Semantic Labeling dataset.", "title": "" }, { "docid": "aa58c15e8b1a6f240c875739f3cd9a36", "text": "STATEMENT OF PROBLEM\nOutcomes of oral implant therapy have been described primarily in terms of implant survival rates and the durability of implant superstructures. Reports of patient-based outcomes of implant therapy have been sparse, and none of these studies have used oral-specific health status measures.\n\n\nPURPOSE\nThis study assessed the impact of implant-stabilized prostheses on the health status of complete denture wearers using patient-based, oral-specific health status measures. It also assessed the influence of preoperative expectations on outcome.\n\n\nMATERIAL AND METHODS\nThree experimental groups requesting replacement of their conventional complete dentures completed an Oral Health Impact Profile (OHIP) and a validated denture satisfaction scale before treatment. One group received an implant-stabilized prosthesis (IG), and 2 groups received new conventional complete dentures (CDG1 and CDG2). 
After treatment, all subjects completed the health status measures again; preoperative data were compared with postoperative data.\n\n\nRESULTS\nBefore treatment, satisfaction with complete dentures was low in all 3 groups. Subjects requesting implants (IG and CDG1) had high expectations for implant-stabilized prostheses. Improvement in denture satisfaction and OHIP scores was reported by all 3 groups after treatment. Subjects who received their preferred treatment (IG and CDG2 subjects) reported a much greater improvement than CDG1 subjects. Preoperative expectation levels did not appear to influence satisfaction with the outcomes of implant therapy in IG subjects.\n\n\nCONCLUSION\nSubjects who received implants (IG) that replaced conventional complete dentures reported significant improvement after treatment, as did subjects who requested conventional replacement dentures (CDG2). The OHIP appears useful in identifying patients likely to benefit from implant-stabilized prostheses.", "title": "" }, { "docid": "0d51dc0edc9c4e1c050b536c7c46d49d", "text": "MOTIVATION\nThe identification of risk-associated genetic variants in common diseases remains a challenge to the biomedical research community. It has been suggested that common statistical approaches that exclusively measure main effects are often unable to detect interactions between some of these variants. Detecting and interpreting interactions is a challenging open problem from the statistical and computational perspectives. Methods in computing science may improve our understanding on the mechanisms of genetic disease by detecting interactions even in the presence of very low heritabilities.\n\n\nRESULTS\nWe have implemented a method using Genetic Programming that is able to induce a Decision Tree to detect interactions in genetic variants. This method has a cross-validation strategy for estimating classification and prediction errors and tests for consistencies in the results. To have better estimates, a new consistency measure that takes into account interactions and can be used in a genetic programming environment is proposed. This method detected five different interaction models with heritabilities as low as 0.008 and with prediction errors similar to the generated errors.\n\n\nAVAILABILITY\nInformation on the generated data sets and executable code is available upon request.", "title": "" }, { "docid": "359d3e06c221e262be268a7f5b326627", "text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.", "title": "" }, { "docid": "717d1c31ac6766fcebb4ee04ca8aa40f", "text": "We present an incremental maintenance algorithm for leapfrog triejoin. 
The algorithm maintains rules in time proportional (modulo log factors) to the edit distance between leapfrog triejoin traces.", "title": "" }, { "docid": "ea04dad2ac1de160f78fa79b33a93b6a", "text": "OBJECTIVE\nTo construct new size charts for all fetal limb bones.\n\n\nDESIGN\nA prospective, cross sectional study.\n\n\nSETTING\nUltrasound department of a large hospital.\n\n\nSAMPLE\n663 fetuses scanned once only for the purpose of the study at gestations between 12 and 42 weeks.\n\n\nMETHODS\nCentiles were estimated by combining separate regression models fitted to the mean and standard deviation, assuming that the measurements have a normal distribution at each gestational age.\n\n\nMAIN OUTCOME MEASURES\nDetermination of fetal limb lengths from 12 to 42 weeks of gestation.\n\n\nRESULTS\nSize charts for fetal bones (radius, ulna, humerus, tibia, fibula, femur and foot) are presented and compared with previously published data.\n\n\nCONCLUSIONS\nWe present new size charts for fetal limb bones which take into consideration the increasing variability with gestational age. We have compared these charts with other published data; the differences seen may be largely due to methodological differences. As standards for fetal head and abdominal measurements have been published from the same population, we suggest that the use of the new charts may facilitate prenatal diagnosis of skeletal dysplasias.", "title": "" }, { "docid": "e4b9dc5b34863144d80bb48e1ab992a7", "text": "As developmental scientists seek to index the strengths of adolescents and adopt the positive youth development (PYD) perspective, psychometrically sound measurement tools will be needed to assess adolescents’ positive attributes. Using a series of exploratory factor analyses and CFA models, this research creates short and very short versions of the scale used to measure the Five Cs of PYD in the 4-H Study of Positive Youth Development. We created separate forms for earlier versus later adolescence and ensured that items displayed sufficient conceptual overlap across forms to support tests of factorial invariance. We discuss implications for further scale development and advocate for the use of these convenient tools, especially in research and applications pertinent to the Five Cs model of PYD. DOI: https://doi.org/10.1111/jora.12039 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-108444 Accepted Version Originally published at: Geldhof, G J; Bowers, Edmond P; Boyd, Michelle J; Mueller, Megan K; Napolitano, Christopher M; Schmid, Kristina L; Lerner, Jacqueline V; Lerner, Richard M (2013). Creation of Short and Very Short Measures of the Five Cs of Positive Youth Development. Journal of Research on Adolescence, 24(1):163176. DOI: https://doi.org/10.1111/jora.12039 Running head: DEVELOPMENT OF SHORT PYD SCALES Creation of Short and Very Short Measures of the Five Cs of Positive Youth Development G. John Geldhof, Edmond P. Bowers, Michelle J. Boyd, Megan Kiely Mueller, Christopher M. Napolitano, Kristina L. Schmid Jacqueline V. Lerner, Richard M. Lerner DEVELOPMENT OF SHORT PYD SCALES 2", "title": "" }, { "docid": "040329beb0f4688ced46d87a51dac169", "text": "We present a characterization methodology for fast direct measurement of the charge accumulated on Floating Gate (FG) transistors of Flash EEPROM cells. 
Using a Scanning Electron Microscope (SEM) in Passive Voltage Contrast (PVC) mode we were able to distinguish between '0' and '1' bit values stored in each memory cell. Moreover, it was possible to characterize the remaining charge on the FG; thus making this technique valuable for Failure Analysis applications for data retention measurements in Flash EEPROM. The technique is at least two orders of magnitude faster than state-of-the-art Scanning Probe Microscopy (SPM) methods. Only a relatively simple backside sample preparation is necessary for accessing the FG of memory transistors. The technique presented was successfully implemented on a 0.35 μm technology node microcontroller and a 0.21 μm smart card integrated circuit. We also show the ease of such technique to cover all cells of a memory (using intrinsic features of SEM) and to automate memory cells characterization using standard image processing technique.", "title": "" }, { "docid": "8e0b16179aabf850c09633df600e6a4a", "text": "Impacts of Informal Caregiving on Caregiver Employment, Health, and Family As the aging population increases, the demand for informal caregiving is becoming an ever more important concern for researchers and policy-makers alike. To shed light on the implications of informal caregiving, this paper reviews current research on its impact on three areas of caregivers’ lives: employment, health, and family. Because the literature is inherently interdisciplinary, the research designs, sampling procedures, and statistical methods used are heterogeneous. Nevertheless, we are still able to draw several conclusions: first, despite the prevalence of informal caregiving and its primary association with lower levels of employment, the affected labor force is seemingly small. Second, such caregiving tends to lower the quality of the caregiver’s psychological health, which also has a negative impact on physical health outcomes. Third, the implications for family life remain under investigated. The research findings also differ strongly among subgroups, although they do suggest that female, spousal, and intense caregivers tend to be the most affected by caregiving. JEL Classification: E26, J14, J46", "title": "" }, { "docid": "279d6de6ed6ade25d5ac0ff3d1ecde49", "text": "This paper explores the relationship between TV viewership ratings for Scandinavian's most popular talk show, Skavlan and public opinions expressed on its Facebook page. The research aim is to examine whether the activity on social media affects the number of viewers per episode of Skavlan, how the viewers are affected by discussions on the Talk Show, and whether this creates debate on social media afterwards. By analyzing TV viewer ratings of Skavlan talk show, Facebook activity and text classification of Facebook posts and comments with respect to type of emotions and brand sentiment, this paper identifes patterns in the users' real-world and digital world behaviour.", "title": "" }, { "docid": "b8ed09081032a790b1c5c4bb3afebfff", "text": "This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. There are two components: i) The face proposal component computes face proposals via estimating facial key-points and the 3D transformation parameters for each predicted keypoint w.r.t. the 3D mean face model. 
ii) The face verification component computes detection results by refining proposals based on configuration pooling.", "title": "" }, { "docid": "42fa545010569b71d1211c413326f869", "text": "Occupational therapists working with Mexican and Mexican American populations may encounter traditional healing practices associated with curanderismo within a variety of practice settings. Curanderismo is a term referring to the practice of traditional healing in Latin American (Hispanic) cultures. This article reviews from the literature the different types of traditional healers (curanderos/as), the remedies recommended by traditional healers and common traditional illnesses treated. Traditional healing practices among Mexican and Mexican Americans may be as high as 50-75% in some parts of the United States. Further research is needed to investigate the effectiveness of curanderismo and its impact on quality of life, activities of daily living and overall social participation.", "title": "" }, { "docid": "4b22eaf527842e0fa41a1cd740ad9b40", "text": "Music transcription is the process of creating a written score of music from an audio recording. Musicians and musicologists use transcription to better understand music that may not have a written form, from improvised jazz solos to traditional folk music. Automatic music transcription introduces signal-processing algorithms to extract pitch and rhythm information from recordings. This speeds up and automates the process of music transcription, which requires musical training and is very time consuming even for experts. This thesis explores the still unsolved problem of automatic music transcription through an in-depth analysis of the problem itself and an overview of different techniques to solve the hardest subtask of music transcription, multiple pitch estimation. It concludes with a close study of a typical multiple pitch estimation algorithm and highlights the challenges that remain unsolved.", "title": "" }, { "docid": "18f95e8a2251e7bd582536c841070961", "text": "This paper proposes and implements the concept of flexible induction heating based on the magnetic resonant coupling (MRC) mechanism. In conventional induction heating systems, the variation of the relative position between the heater and workpiece significantly deteriorates the heating performance. In particular, the heating effect dramatically reduces with the increase of vertical displacement or horizontal misalignment. This paper utilizes the MRC mechanism to effectuate flexible induction heating; thus, handling the requirements of varying vertical displacement and horizontal misalignment for various cooking styles. Differing from a conventional induction heating, the proposed induction heating adopts one resonant coil in the heater and one resonant coil in the workpiece, which can significantly strengthen the coupling effect, and, hence, the heating effect. Both the simulation and experimental results are given to validate the feasibility and flexibility of the proposed induction heating.", "title": "" }, { "docid": "5f2b4caef605ab07ca070552e308d6e6", "text": "The objective of CLEF is to promote research in the field of multilingual system development. This is done through the organisation of annual evaluation campaigns in which a series of tracks designed to test different aspects of monoand cross-language information retrieval (IR) are offered. 
The intention is to encourage experimentation with all kinds of multilingual information access – from the development of systems for monolingual retrieval operating on many languages to the implementation of complete multilingual multimedia search services. This has been achieved by offering an increasingly complex and varied set of evaluation tasks over the years. The aim is not only to meet but also to anticipate the emerging needs of the R&D community and to encourage the development of next generation multilingual IR systems. These Working Notes contain descriptions of the experiments conducted within CLEF 2006 – the sixth in a series of annual system evaluation campaigns. The results of the experiments will be presented and discussed in the CLEF 2006 Workshop, 20-22 September, Alicante, Spain. The final papers revised and extended as a result of the discussions at the Workshop together with a comparative analysis of the results will appear in the CLEF 2006 Proceedings, to be published by Springer in their Lecture Notes for Computer Science series. As from CLEF 2005, the Working Notes are published in electronic format only and are distributed to participants at the Workshop on CD-ROM together with the Book of Abstracts in printed form. All reports included in the Working Notes will also be inserted in the DELOS Digital Library, accessible at http://delos-dl.isti.cnr.it. Both Working Notes and Book of Abstracts are divided into eight sections, corresponding to the CLEF 2006 evaluation tracks. In addition appendices are included containing run statistics for the Ad Hoc, Domain-Specific, GeoCLEF and QA tracks, plus a list of all participating groups showing in which track they took part. The main features of the 2006 campaign are briefly outlined here below in order to provide the necessary background to the experiments reported in the rest of the Working Notes.", "title": "" } ]
scidocsrr
38d2b61cf03b84ee81e408944d567e4e
Ontology Based Expert-System for Suspicious Transactions Detection
[ { "docid": "d7aeb8de7bf484cbaf8e23fcf675d002", "text": "One method for detecting fraud is to check for suspicious changes in user behavior. This paper proposes a novel method, built upon ontology and ontology instance similarity. Ontology is now widely used to enable knowledge sharing and reuse, so some personality ontologies can be easily used to present user behavior. By measure the similarity of ontology instances, we can determine whether an account is defrauded. This method lows the data model cost and make the system very adaptive to different applications.", "title": "" } ]
[ { "docid": "ee6612fa13482f7e3bbc7241b9e22297", "text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.", "title": "" }, { "docid": "e09d45316d48894bcfb3c5657cd19118", "text": "In recent years, multiple-line acquisition (MLA) has been introduced to increase frame rate in cardiac ultrasound medical imaging. However, this method induces blocklike artifacts in the image. One approach suggested, synthetic transmit beamforming (STB), involves overlapping transmit beams which are then interpolated to remove the MLA blocking artifacts. Independently, the application of minimum variance (MV) beamforming has been suggested in the context of MLA. We demonstrate here that each approach is only a partial solution and that combining them provides a better result than applying either approach separately. This is demonstrated by using both simulated and real phantom data, as well as cardiac data. We also show that the STB-compensated MV beamfomer outperforms single-line acquisition (SLA) delay- and-sum in terms of lateral resolution.", "title": "" }, { "docid": "bfd23678afff2ac4cd4650cf46195590", "text": "The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS' unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and \"lone wolf\" attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS' sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. 
We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contibutes to the group's propaganda dissemination through retweets.", "title": "" }, { "docid": "39cc52cd5ba588e9d4799c3b68620f18", "text": "Using data from a popular online social network site, this paper explores the relationship between profile structure (namely, which fields are completed) and number of friends, giving designers insight into the importance of the profile and how it works to encourage connections and articulated relationships between users. We describe a theoretical framework that draws on aspects of signaling theory, common ground theory, and transaction costs theory to generate an understanding of why certain profile fields may be more predictive of friendship articulation on the site. Using a dataset consisting of 30,773 Facebook profiles, we determine which profile elements are most likely to predict friendship links and discuss the theoretical and design implications of our findings.", "title": "" }, { "docid": "aa3c4e267122b636eae557513900dd85", "text": "At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way to quantify whether the student has mastered a skill. A large amount of work has been done on building student models that can predict student performance on the next question. In this paper, we leverage this prior work with a new whento-stop policy that is compatible with any such predictive student model. Our results suggest that, when employed as part of our new predictive similarity policy, student models with similar predictive accuracies can suggest that substantially different amounts of practice are necessary. This suggests that predictive accuracy may not be a sufficient metric by itself when choosing which student model to use in intelligent tutoring systems.", "title": "" }, { "docid": "e0580a51b7991f86559a7a3aa8b26204", "text": "A new ultra-wideband monocycle pulse generator with good performance is designed and demonstrated. The pulse generator circuits employ SRD(step recovery diode), Schottky diode, and simple RC coupling and decoupling circuit, and are completely fabricated on the planar microstrip structure, which have the characteristic of low cost and small size. Through SRD modeling, the accuracy of the simulation is improved, which save the design period greatly. The generated monocycle pulse has the peak-to-peak amplitude 1.3V, pulse width 370ps and pulse repetition rate of 10MHz, whose waveform features are symmetric well and low ringing level. Good agreement between the measured and calculated results is achieved.", "title": "" }, { "docid": "b103e091df051f4958317b3b7806fa71", "text": "We present a static, precise, and scalable technique for finding CVEs (Common Vulnerabilities and Exposures) in stripped firmware images. Our technique is able to efficiently find vulnerabilities in real-world firmware with high accuracy. 
Given a vulnerable procedure in an executable binary and a firmware image containing multiple stripped binaries, our goal is to detect possible occurrences of the vulnerable procedure in the firmware image. Due to the variety of architectures and unique tool chains used by vendors, as well as the highly customized nature of firmware, identifying procedures in stripped firmware is extremely challenging. Vulnerability detection requires not only pairwise similarity between procedures but also information about the relationships between procedures in the surrounding executable. This observation serves as the foundation for a novel technique that establishes a partial correspondence between procedures in the two binaries. We implemented our technique in a tool called FirmUp and performed an extensive evaluation over 40 million procedures, over 4 different prevalent architectures, crawled from public vendor firmware images. We discovered 373 vulnerabilities affecting publicly available firmware, 147 of them in the latest available firmware version for the device. A thorough comparison of FirmUp to previous methods shows that it accurately and effectively finds vulnerabilities in firmware, while outperforming the detection rate of the state of the art by 45% on average.", "title": "" }, { "docid": "dc53e2bf9576fd3fb7670b0860eae754", "text": "In the field of ADAS and self-driving car, lane and drivable road detection play an essential role in reliably accomplishing other tasks, such as objects detection. For monocular vision based semantic segmentation of lane and road, we propose a dilated feature pyramid network (FPN) with feature aggregation, called DFFA, where feature aggregation is employed to combine multi-level features enhanced with dilated convolution operations and FPN under the framework of ResNet. Experimental results validate effectiveness and efficiency of the proposed deep learning model for semantic segmentation of lane and drivable road. Our DFFA achieves the best performance both on Lane Estimation Evaluation and Behavior Evaluation tasks in KITTI-ROAD and take the second place on UU ROAD task.", "title": "" }, { "docid": "4960f2d2215dbc8cf746b4f1a22f6756", "text": "Current deep learning approaches have been very successful using convolutional neural networks trained on large graphical-processing-unit-based computers. Three limitations of this approach are that (1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; (2) the networks are manually configured to achieve optimal results, and (3) the implementation of the network model is expensive in both cost and power. In this article, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. We use the MNIST dataset for our experiment, due to input size limitations of current quantum computers. Our results show the feasibility of using the three architectures in tandem to address the above deep learning limitations. 
We show that a quantum computer can find high quality values of intra-layer connection weights in a tractable time as the complexity of the network increases, a high performance computer can find optimal layer-based topologies, and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware.", "title": "" }, { "docid": "7403408ad427f9613110a4f40c693893", "text": "Recommending news items is traditionally done by term-based algorithms like TF-IDF. This paper concentrates on the benefits of recommending news items using a domain ontology instead of using a term-based approach. For this purpose, we propose Athena, which is an extension to the existing Hermes framework. Athena employs a user profile to store terms or concepts found in news items browsed by the user. Based on this information, the framework uses a traditional method based on TF-IDF, and several ontology-based methods to recommend new articles to the user. The paper concludes with the evaluation of the different methods, which shows that the new ontology-based method that we propose in this paper performs better (w.r.t. accuracy, precision, and recall) than the traditional method and, with the exception of one measure (recall), also better than the other considered ontology-based approaches.", "title": "" }, { "docid": "3ddcf5f0e4697a0d43eff2cca77a1ab7", "text": "Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning based approach to solid lymph node detection that relies on marginal space learning to achieve great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding a 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding a 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for axillary areas and 15-40 s for pelvic. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.", "title": "" }, { "docid": "118738ca4b870e164c7be53e882a9ab4", "text": "IA. Cause and Effect . . . . . . . . . . . . . . 465 1.2. Prerequisites of Selforganization . . . . . . . 467 1.2.3. Evolut ion Must S ta r t f rom R andom Even ts 467 1.2.2. Ins t ruc t ion Requires In format ion . . . . 467 1.2.3. In format ion Originates or Gains Value by S e l e c t i o n . . . . . . . . . . . . . . . 469 1.2.4. Selection Occurs wi th Special Substances under Special Conditions . . . . . . . . 470", "title": "" }, { "docid": "8c0d3cfffb719f757f19bbb33412d8c6", "text": "In this paper, we present a parallel Image-to-Mesh Conversion (I2M) algorithm with quality and fidelity guarantees achieved by dynamic point insertions and removals. Starting directly from an image, it is able to recover the isosurface and mesh the volume with tetrahedra of good shape. 
Our tightly-coupled shared-memory parallel speculative execution paradigm employs carefully designed contention managers, load balancing, synchronization and optimizations schemes which boost the parallel efficiency with little overhead: our single-threaded performance is faster than CGAL, the state of the art sequential mesh generation software we are aware of. The effectiveness of our method is shown on Blacklight, the Pittsburgh Supercomputing Center's cache-coherent NUMA machine, via a series of case studies justifying our choices. We observe a more than 82% strong scaling efficiency for up to 64 cores, and a more than 95% weak scaling efficiency for up to 144 cores, reaching a rate of 14.7 Million Elements per second. To the best of our knowledge, this is the fastest and most scalable 3D Delaunay refinement algorithm.", "title": "" }, { "docid": "864c2987092ca266b97ed11faec42aa3", "text": "BACKGROUND\nAnxiety is the most common emotional response in women during delivery, which can be accompanied with adverse effects on fetus and mother.\n\n\nOBJECTIVES\nThis study was conducted to compare the effects of aromatherapy with rose oil and warm foot bath on anxiety in the active phase of labor in nulliparous women in Tehran, Iran.\n\n\nPATIENTS AND METHODS\nThis clinical trial study was performed after obtaining informed written consent on 120 primigravida women randomly assigned into three groups. The experimental group 1 received a 10-minute inhalation and footbath with oil rose. The experimental group 2 received a 10-minute warm water footbath. Both interventions were applied at the onset of active and transitional phases. Control group, received routine care in labor. Anxiety was assessed using visual analogous scale (VASA) at onset of active and transitional phases before and after the intervention. Statistical comparison was performed using SPSS software version 16 and P < 0.05 was considered significant.\n\n\nRESULTS\nAnxiety scores in the intervention groups in active phase after intervention were significantly lower than the control group (P < 0.001). Anxiety scores before and after intervention in intervention groups in transitional phase was significantly lower than the control group (P < 0.001).\n\n\nCONCLUSIONS\nUsing aromatherapy and footbath reduces anxiety in active phase in nulliparous women.", "title": "" }, { "docid": "58d7e76a4b960e33fc7b541d04825dc9", "text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. 
This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.", "title": "" }, { "docid": "e07a731a2c4fa39be27a13b5b5679593", "text": "Ocean acidification is rapidly changing the carbonate system of the world oceans. Past mass extinction events have been linked to ocean acidification, and the current rate of change in seawater chemistry is unprecedented. Evidence suggests that these changes will have significant consequences for marine taxa, particularly those that build skeletons, shells, and tests of biogenic calcium carbonate. Potential changes in species distributions and abundances could propagate through multiple trophic levels of marine food webs, though research into the long-term ecosystem impacts of ocean acidification is in its infancy. This review attempts to provide a general synthesis of known and/or hypothesized biological and ecosystem responses to increasing ocean acidification. Marine taxa covered in this review include tropical reef-building corals, cold-water corals, crustose coralline algae, Halimeda, benthic mollusks, echinoderms, coccolithophores, foraminifera, pteropods, seagrasses, jellyfishes, and fishes. The risk of irreversible ecosystem changes due to ocean acidification should enlighten the ongoing CO(2) emissions debate and make it clear that the human dependence on fossil fuels must end quickly. Political will and significant large-scale investment in clean-energy technologies are essential if we are to avoid the most damaging effects of human-induced climate change, including ocean acidification.", "title": "" }, { "docid": "4500c668414d0cb1ff18bb8ec00f1d8f", "text": "Governments around the world are increasingly utilising online platforms and social media to engage with, and ascertain the opinions of, their citizens. Whilst policy makers could potentially benefit from such enormous feedback from society, they first face the challenge of making sense out of the large volumes of data produced. In this article, we show how the analysis of argumentative and dialogical structures allows for the principled identification of those issues that are central, controversial, or popular in an online corpus of debates. Although areas such as controversy mining work towards identifying issues that are a source of disagreement, by looking at the deeper argumentative structure, we show that a much richer understanding can be obtained. We provide results from using a pipeline of argument-mining techniques on the debate corpus, showing that the accuracy obtained is sufficient to automatically identify those issues that are key to the discussion, attracting proportionately more support than others, and those that are divisive, attracting proportionately more conflicting viewpoints.", "title": "" }, { "docid": "75233d6d94fec1f43fa02e8043470d4d", "text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. 
First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.", "title": "" }, { "docid": "03c13e81803517d2be66e8bc25b7012c", "text": "Extractors and taggers turn unstructured text into entity-relation(ER) graphs where nodes are entities (email, paper, person,conference, company) and edges are relations (wrote, cited,works-for). Typed proximity search of the form <B>type=personNEAR company~\"IBM\", paper~\"XML\"</B> is an increasingly usefulsearch paradigm in ER graphs. Proximity search implementations either perform a Pagerank-like computation at query time, which is slow, or precompute, store and combine per-word Pageranks, which can be very expensive in terms of preprocessing time and space. We present HubRank, a new system for fast, dynamic, space-efficient proximity searches in ER graphs. During preprocessing, HubRank computesand indexes certain \"sketchy\" random walk fingerprints for a small fraction of nodes, carefully chosen using query log statistics. At query time, a small \"active\" subgraph is identified, bordered bynodes with indexed fingerprints. These fingerprints are adaptively loaded to various resolutions to form approximate personalized Pagerank vectors (PPVs). PPVs at remaining active nodes are now computed iteratively. We report on experiments with CiteSeer's ER graph and millions of real Cite Seer queries. Some representative numbers follow. On our testbed, HubRank preprocesses and indexes 52 times faster than whole-vocabulary PPV computation. A text index occupies 56 MB. Whole-vocabulary PPVs would consume 102GB. If PPVs are truncated to 56 MB, precision compared to true Pagerank drops to 0.55; incontrast, HubRank has precision 0.91 at 63MB. HubRank's average querytime is 200-300 milliseconds; query-time Pagerank computation takes 11 seconds on average.", "title": "" } ]
scidocsrr
4562553f10e039c1f88b0b00caa38a37
Parallel matrix factorization for low-rank tensor completion
[ { "docid": "36f2be7a14eeb10ad975aa00cfd30f36", "text": "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rnK−1) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r +nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(rbK/2cndK/2e) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. l1, nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.", "title": "" }, { "docid": "d97e9181f01f195c0b299ce8893ddbbd", "text": "Linear algebra is a powerful and proven tool in Web search. Techniques, such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg, score Web pages based on the principal eigenvector (or singular vector) of a particular non-negative matrix that captures the hyperlink structure of the Web graph. We propose and test a new methodology that uses multilinear algebra to elicit more information from a higher-order representation of the hyperlink graph. We start by labeling the edges in our graph with the anchor text of the hyperlinks so that the associated linear algebra representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the Web pages while the third dimension adds the anchor text. We then use the rank-1 factors of a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify topics in the collection along with the associated authoritative Web pages.", "title": "" } ]
[ { "docid": "e66f7a7e3fcb833edde92bba24cb7145", "text": "Essential oils are complex blends of a variety of volatile molecules such as terpenoids, phenol-derived aromatic components, and aliphatic components having a strong interest in pharmaceutical, sanitary, cosmetic, agricultural, and food industries. Since the middle ages, essential oils have been widely used for bactericidal, virucidal, fungicidal, antiparasitical, insecticidal, and other medicinal properties such as analgesic, sedative, anti-inflammatory, spasmolytic, and locally anaesthetic remedies. In this review their nanoencapsulation in drug delivery systems has been proposed for their capability of decreasing volatility, improving the stability, water solubility, and efficacy of essential oil-based formulations, by maintenance of therapeutic efficacy. Two categories of nanocarriers can be proposed: polymeric nanoparticulate formulations, extensively studied with significant improvement of the essential oil antimicrobial activity, and lipid carriers, including liposomes, solid lipid nanoparticles, nanostructured lipid particles, and nano- and microemulsions. Furthermore, molecular complexes such as cyclodextrin inclusion complexes also represent a valid strategy to increase water solubility and stability and bioavailability and decrease volatility of essential oils.", "title": "" }, { "docid": "4ac3affdf995c4bb527229da0feb411d", "text": "Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence.\n Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed.\n We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125x Bitcoin's throughput, and incurs almost no penalty for scaling to more users.", "title": "" }, { "docid": "2f2b45468b05bf5c7b4666006df1389b", "text": "If an outbound flow is observed at the boundary of a protected network, destined to an IP address within a few addresses of a known malicious IP address, should it be considered a suspicious flow? Conventional blacklisting is not going to cut it in this situation, and the established fact that malicious IP addresses tend to be highly clustered in certain portions of IP address space, should indeed raise suspicions. We present a new approach for perimeter defense that addresses this concern. 
At the heart of our approach, we attempt to infer internal, hidden boundaries in IP address space, that lie within publicly known boundaries of registered IP netblocks. Our hypothesis is that given a known bad IP address, other IP address in the same internal contiguous block are likely to share similar security properties, and may therefore be vulnerable to being similarly hacked and used by attackers in the future. In this paper, we describe how we infer hidden internal boundaries in IPv4 netblocks, and what effect this has on being able to predict malicious IP addresses.", "title": "" }, { "docid": "352ae5b752217faa02c20a93f110bcd6", "text": "This paper serves to prove the thesis that a computational trick can open entirely new approaches to theory. We illustrate by describing such random matrix techniques as the stochastic operator approach, the method of ghosts and shadows, and the method of “Riccatti Diffusion/Sturm Sequences,” giving new insights into the deeper mathematics underneath random matrix theory.", "title": "" }, { "docid": "9c35b7e3bf0ef3f3117c6ba8a9ad1566", "text": "Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning. In order to accelerate the convergence of SGD, a few advanced techniques have been developed in recent years, including variance reduction, stochastic coordinate sampling, and Nesterov’s acceleration method. Furthermore, in order to improve the training speed and/or leverage larger-scale training data, asynchronous parallelization of SGD has also been studied. Then, a natural question is whether these techniques can be seamlessly integrated with each other, and whether the integration has desirable theoretical guarantee on its convergence. In this paper, we provide our formal answer to this question. In particular, we consider the asynchronous parallelization of SGD, accelerated by leveraging variance reduction, coordinate sampling, and Nesterov’s method. We call the new algorithm asynchronous accelerated SGD (AASGD). Theoretically, we proved a convergence rate of AASGD, which indicates that (i) the three acceleration methods are complementary to each other and can make their own contributions to the improvement of convergence rate; (ii) asynchronous parallelization does not hurt the convergence rate, and can achieve considerable speedup under appropriate parameter setting. Empirically, we tested AASGD on a few benchmark datasets. The experimental results verified our theoretical findings and indicated that AASGD could be a highly effective and efficient algorithm for practical use.", "title": "" }, { "docid": "818c075d79a51fcab4c38031f14a98ef", "text": "This paper presents a statistical approach to collaborative ltering and investigates the use of latent class models for predicting individual choices and preferences based on observed preference behavior. Two models are discussed and compared: the aspect model, a probabilistic latent space model which models individual preferences as a convex combination of preference factors, and the two-sided clustering model, which simultaneously partitions persons and objects into clusters. We present EM algorithms for di erent variants of the aspect model and derive an approximate EM algorithmbased on a variational principle for the two-sided clustering model. 
The benefits of the different models are experimentally investigated on a large movie data set.", "title": "" },
    { "docid": "353bbc5e68ec1d53b3cd0f7c352ee699", "text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" },
    { "docid": "f1f424a703eefaabe8c704bd07e21a21", "text": "It is more convincing for users to have their own 3-D body shapes in the virtual fitting room when they shop clothes online. However, existing methods are limited for ordinary users to efficiently and conveniently access their 3-D bodies. We propose an efficient data-driven approach and develop an android application for 3-D body customization. Users stand naturally and their photos are taken from front and side views with a handy phone camera. They can wear casual clothes like a short-sleeved/long-sleeved shirt and short/long pants. First, we develop a user-friendly interface to semi-automatically segment the human body from photos. Then, the segmented human contours are scaled and translated to the ones under our virtual camera configurations. Through this way, we only need one camera to take photos of human in two views and do not need to calibrate the camera, which satisfy the convenience requirement. Finally, we learn body parameters that determine the 3-D body from dressed-human silhouettes with cascaded regressors. The regressors are trained using a database containing 3-D naked and dressed body pairs. Body parameters regression only costs 1.26 s on an android phone, which ensures the efficiency of our method. We invited 12 volunteers for tests, and the mean absolute estimation error for chest/waist/hip size is 2.89/1.93/2.22 centimeters. We additionally use 637 synthetic data to evaluate the main procedures of our approach.", "title": "" },
    { "docid": "34b7073f947888694053cb421544cb37", "text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. 
They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "title": "" }, { "docid": "a73f07080a2f93a09b05b58184acf306", "text": "This survey paper categorises, compares, and summarises from almost all published technical and review articles in automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers much more technical articles and is the only one, to the best of our knowledge, which proposes alternative data and solutions from related domains.", "title": "" }, { "docid": "619f38266a35e76a77fb4141879e1e68", "text": "In article various approaches to measurement of efficiency of innovations and the problems arising at their measurement are considered, the system of an indistinct conclusion for the solution of a problem of obtaining recommendations about measurement of efficiency of innovations is offered.", "title": "" }, { "docid": "df11dd8d4a4945f37ad3771cc6655120", "text": "In this paper, we consider the problem of open information extraction (OIE) for extracting entity and relation level intermediate structures from sentences in open-domain. We focus on four types of valuable intermediate structures (Relation, Attribute, Description, and Concept), and propose a unified knowledge expression form, SAOKE, to express them. We publicly release a data set which contains 48,248 sentences and the corresponding facts in the SAOKE format labeled by crowdsourcing. To our knowledge, this is the largest publicly available human labeled data set for open information extraction tasks. Using this labeled SAOKE data set, we train an end-to-end neural model using the sequence-to-sequence paradigm, called Logician, to transform sentences into facts. For each sentence, different to existing algorithms which generally focus on extracting each single fact without concerning other possible facts, Logician performs a global optimization over all possible involved facts, in which facts not only compete with each other to attract the attention of words, but also cooperate to share words. An experimental study on various types of open domain relation extraction tasks reveals the consistent superiority of Logician to other states-of-the-art algorithms. The experiments verify the reasonableness of SAOKE format, the valuableness of SAOKE data set, the effectiveness of the proposed Logician model, and the feasibility of the methodology to apply end-to-end learning paradigm on supervised data sets for the challenging tasks of open information extraction.", "title": "" }, { "docid": "b89f2c70e3c9e2258c2cdf3f9b2bfb1b", "text": "One-size-fits-all protocols are hard to achieve in Byzantine fault tolerance (BFT). As an alternative, BFT users, e.g., enterprises, need an easy and efficient method to choose the most convenient protocol that matches their preferences best. The various BFT protocols that have been proposed so far differ significantly in their characteristics and performance which makes choosing the ‘preferred’ protocol hard. 
In addition, if the state of the deployed system is too fluctuating, then perhaps using multiple protocols at once is needed; this requires a dynamic selection mechanism to move from one protocol to another. In this paper, we present the first BFT selection model and algorithm that can be used to choose the most convenient protocol according to user preferences. The selection algorithm applies some mathematical formulas to make the selection process easy and automatic. The algorithm operates in three modes: Static, Dynamic, and Heuristic. The Static mode addresses the cases where a single protocol is needed; the Dynamic mode assumes that the system conditions are quite fluctuating and thus requires runtime decisions, and the Heuristic mode is similar to the Dynamic mode but it uses additional heuristics to improve user choices. We give some examples to describe how selection occurs. We show that our approach is automated, easy, and yields reasonable results that match reality. To the best of our knowledge, this is the first work that addresses selection in BFT.", "title": "" }, { "docid": "a1ebca14dcf943116b2808b9d954f6f4", "text": "In this work, the human parsing task, namely decomposing a human image into semantic fashion/body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion/body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-arts for human parsing. In particular, the F1-score reaches 64.38 percent by our ATR framework, significantly higher than 44.76 percent based on the state-of-the-art algorithm [28].", "title": "" }, { "docid": "140266d9b788417d62ceee20c38f5e92", "text": "Statistical dependencies in the responses of sensory neurons govern both the amount of stimulus information conveyed and the means by which downstream neurons can extract it. Although a variety of measurements indicate the existence of such dependencies, their origin and importance for neural coding are poorly understood. Here we analyse the functional significance of correlated firing in a complete population of macaque parasol retinal ganglion cells using a model of multi-neuron spike responses. 
The model, with parameters fit directly to physiological data, simultaneously captures both the stimulus dependence and detailed spatio-temporal correlations in population responses, and provides two insights into the structure of the neural code. First, neural encoding at the population level is less noisy than one would expect from the variability of individual neurons: spike times are more precise, and can be predicted more accurately when the spiking of neighbouring neurons is taken into account. Second, correlations provide additional sensory information: optimal, model-based decoding that exploits the response correlation structure extracts 20% more information about the visual scene than decoding under the assumption of independence, and preserves 40% more visual information than optimal linear decoding. This model-based approach reveals the role of correlated activity in the retinal coding of visual stimuli, and provides a general framework for understanding the importance of correlated activity in populations of neurons.", "title": "" }, { "docid": "da7beedfca8e099bb560120fc5047399", "text": "OBJECTIVE\nThis study aims to assess the relationship of late-night cell phone use with sleep duration and quality in a sample of Iranian adolescents.\n\n\nMETHODS\nThe study population consisted of 2400 adolescents, aged 12-18 years, living in Isfahan, Iran. Age, body mass index, sleep duration, cell phone use after 9p.m., and physical activity were documented. For sleep assessment, the Pittsburgh Sleep Quality Index questionnaire was used.\n\n\nRESULTS\nThe participation rate was 90.4% (n=2257 adolescents). The mean (SD) age of participants was 15.44 (1.55) years; 1270 participants reported to use cell phone after 9p.m. Overall, 56.1% of girls and 38.9% of boys reported poor quality sleep, respectively. Wake-up time was 8:17 a.m. (2.33), among late-night cell phone users and 8:03a.m. (2.11) among non-users. Most (52%) late-night cell phone users had poor sleep quality. Sedentary participants had higher sleep latency than their peers. Adjusted binary and multinomial logistic regression models showed that late-night cell users were 1.39 times more likely to have a poor sleep quality than non-users (p-value<0.001).\n\n\nCONCLUSION\nLate-night cell phone use by adolescents was associated with poorer sleep quality. Participants who were physically active had better sleep quality and quantity. As part of healthy lifestyle recommendations, avoidance of late-night cell phone use should be encouraged in adolescents.", "title": "" }, { "docid": "2729749e10b5c6f055b10eebb0c5f179", "text": "An emerging solution for prolonging the lifetime of energy constrained relay nodes in wireless networks is to avail the ambient radio-frequency (RF) signal and to simultaneously harvest energy and process information. In this paper, an amplify-and-forward (AF) relaying network is considered, where an energy constrained relay node harvests energy from the received RF signal and uses that harvested energy to forward the source information to the destination. Based on the time switching and power splitting receiver architectures, two relaying protocols, namely, i) time switching-based relaying (TSR) protocol and ii) power splitting-based relaying (PSR) protocol are proposed to enable energy harvesting and information processing at the relay. 
In order to determine the throughput, analytical expressions for the outage probability and the ergodic capacity are derived for delay-limited and delay-tolerant transmission modes, respectively. The numerical analysis provides practical insights into the effect of various system parameters, such as energy harvesting time, power splitting ratio, source transmission rate, source to relay distance, noise power, and energy harvesting efficiency, on the performance of wireless energy harvesting and information processing using AF relay nodes. In particular, the TSR protocol outperforms the PSR protocol in terms of throughput at relatively low signal-to-noise-ratios and high transmission rates.", "title": "" }, { "docid": "7bf137d513e7a310e121eecb5f59ae27", "text": "BACKGROUND\nChildren with intellectual disability are at heightened risk for behaviour problems and diagnosed mental disorder.\n\n\nMETHODS\nThe present authors studied the early manifestation and continuity of problem behaviours in 205 pre-school children with and without developmental delays.\n\n\nRESULTS\nBehaviour problems were quite stable over the year from age 36-48 months. Children with developmental delays were rated higher on behaviour problems than their non-delayed peers, and were three times as likely to score in the clinical range. Mothers and fathers showed high agreement in their rating of child problems, especially in the delayed group. Parenting stress was also higher in the delayed group, but was related to the extent of behaviour problems rather than to the child's developmental delay.\n\n\nCONCLUSIONS\nOver time, a transactional model fit the relationship between parenting stress and behaviour problems: high parenting stress contributed to a worsening in child behaviour problems over time, and high child behaviour problems contributed to a worsening in parenting stress. Findings for mothers and fathers were quite similar.", "title": "" }, { "docid": "2b97be612e11b8fefc1f8dcf8ff47603", "text": "Images of an object under different illumination are known to provide strong cues about the object surface. A mathematical formalization of how to recover the normal map of such a surface leads to the so-called uncalibrated photometric stereo problem. In the simplest instance, this problem can be reduced to the task of identifying only three parameters: the so-called generalized bas-relief (GBR) ambiguity. The challenge is to find additional general assumptions about the object, that identify these parameters uniquely. Current approaches are not consistent, i.e., they provide different solutions when run multiple times on the same data. To address this limitation, we propose exploiting local diffuse reflectance (LDR) maxima, i.e., points in the scene where the normal vector is parallel to the illumination direction (see Fig. 1). We demonstrate several noteworthy properties of these maxima: a closed-form solution, computational efficiency and GBR consistency. An LDR maximum yields a simple closed-form solution corresponding to a semi-circle in the GBR parameters space (see Fig. 2); because as few as two diffuse maxima in different images identify a unique solution, the identification of the GBR parameters can be achieved very efficiently; finally, the algorithm is consistent as it always returns the same solution given the same data. 
Our algorithm is also remarkably robust: It can obtain an accurate estimate of the GBR parameters even with extremely high levels of outliers in the detected maxima (up to 80 % of the observations). The method is validated on real data and achieves state-of-the-art results.", "title": "" } ]
scidocsrr
cb5ecbe2df35a4b18cd4f423304f26c9
Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "57a48d8c45b7ed6bbcde11586140f8b6", "text": "We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.", "title": "" }, { "docid": "acc526dd0d86c5bf83034b3cd4c1ea38", "text": "We describe a learning-based approach to handeye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. 
Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.", "title": "" } ]
[ { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "fb46f67ba94cb4d7dd7620e2bdf5f00e", "text": "We design and implement TwinsCoin, the first cryptocurrency based on a provably secure and scalable public blockchain design using both proof-of-work and proof-of-stake mechanisms. Different from the proof-of-work based Bitcoin, our construction uses two types of resources, computing power and coins (i.e., stake). The blockchain in our system is more robust than that in a pure proof-of-work based system; even if the adversary controls the majority of mining power, we can still have the chance to secure the system by relying on honest stake. In contrast, Bitcoin blockchain will be insecure if the adversary controls more than 50% of mining power.\n Our design follows a recent provably secure proof-of-work/proof-of-stake hybrid blockchain[11]. In order to make our construction practical, we considerably enhance its design. In particular, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide a theoretical analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically.\n We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex. It allows us to change only certain parts of an application leaving other codebase intact. In addition to the blockchain implementation, a testnet is deployed. 
Source code is publicly available.", "title": "" }, { "docid": "1fcaa9ebde2922c13ce42f8f90c9c6ba", "text": "Despite advances in HIV treatment, there continues to be great variability in the progression of this disease. This paper reviews the evidence that depression, stressful life events, and trauma account for some of the variation in HIV disease course. Longitudinal studies both before and after the advent of highly active antiretroviral therapies (HAART) are reviewed. To ensure a complete review, PubMed was searched for all English language articles from January 1990 to July 2007. We found substantial and consistent evidence that chronic depression, stressful events, and trauma may negatively affect HIV disease progression in terms of decreases in CD4 T lymphocytes, increases in viral load, and greater risk for clinical decline and mortality. More research is warranted to investigate biological and behavioral mediators of these psychoimmune relationships, and the types of interventions that might mitigate the negative health impact of chronic depression and trauma. Given the high rates of depression and past trauma in persons living with HIV/AIDS, it is important for healthcare providers to address these problems as part of standard HIV care.", "title": "" }, { "docid": "14f3ecd814f5affe186146288d83697c", "text": "Accidental intra-arterial filler injection may cause significant tissue injury and necrosis. Hyaluronic acid (HA) fillers, currently the most popular, are the focus of this article, which highlights complications and their symptoms, risk factors, and possible treatment strategies. Although ischemic events do happen and are therefore important to discuss, they seem to be exceptionally rare and represent a small percentage of complications in individual clinical practices. However, the true incidence of this complication is unknown because of underreporting by clinicians. Typical clinical findings include skin blanching, livedo reticularis, slow capillary refill, and dusky blue-red discoloration, followed a few days later by blister formation and finally tissue slough. Mainstays of treatment (apart from avoidance by meticulous technique) are prompt recognition, immediate treatment with hyaluronidase, topical nitropaste under occlusion, oral acetylsalicylic acid (aspirin), warm compresses, and vigorous massage. Secondary lines of treatment may involve intra-arterial hyaluronidase, hyperbaric oxygen therapy, and ancillary vasodilating agents such as prostaglandin E1. Emergency preparedness (a \"filler crash cart\") is emphasized, since early intervention is likely to significantly reduce morbidity. A clinical summary chart is provided, organized by complication presentation.", "title": "" }, { "docid": "df833f98f7309a5ab5f79fae2f669460", "text": "Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties that emanate from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of the system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components.) The goal is to facilitate saving in the system component power consumption, which tends to dominate the total power consumption. 
The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby, enabling the LPM to do even more component power optimizations. In this hierarchical DPM framework, power and latency tradeoffs of each type of application can be precisely controlled based on a user-defined parameter. Experiments show that the amount of average power saving is up to 31.1% compared to existing approaches.", "title": "" },
    { "docid": "6fdd0fdbf609832138bfd1a5b6ebb3e7", "text": "A Review of Factors Affecting Music Recommender Success. (Passage body illegible in the source; recoverable headings: 1. INTRODUCTION; 2. THE PROBLEM; 3. PSYCHOLOGICAL FACTORS AND MUSICAL TASTE; 3.1 Personality, Demographics and Music Preference.)", "title": "" },
    { "docid": "1832e7fe9b0d2f034c22777a6783cfde", "text": "Recently, Monte-Carlo Tree Search (MCTS) has become a popular approach for intelligent play in games. Amongst others, it is successfully used in most state-of-the-art Go programs. To improve the playing strength of these Go programs any further, many parameters dealing with MCTS should be fine-tuned. In this paper, we propose to apply the Cross-Entropy Method (CEM) for this task. The method is comparable to Estimation-of-Distribution Algorithms (EDAs), a new area of evolutionary computation. We tested CEM by tuning various types of parameters in our Go program MANGO. The experiments were performed in matches against the open-source program GNU GO. They revealed that a program with the CEM-tuned parameters played better than without. Moreover, MANGO plus CEM outperformed the regular MANGO for various time settings and board sizes. From the results we may conclude that parameter tuning by CEM genuinely improved the playing strength of MANGO, for various time settings. This result may be generalized to other game engines using MCTS.", "title": "" },
    { "docid": "b50918f904d08f678cb153b16b052344", "text": "According to Earnshaw's theorem, the ratio between axial and radial stiffness is always -2 for pure permanent magnetic configurations with rotational symmetry. Using highly permeable material increases the force and stiffness of permanent magnetic bearings. However, the stiffness in the unstable direction increases more than the stiffness in the stable direction. This paper presents an analytical approach to calculating the axial force and the axial and radial stiffnesses of attractive passive magnetic bearings (PMBs) with back iron. The investigations are based on the method of image charges and show in which magnet geometries lead to reasonable axial to radial stiffness ratios. Furthermore, the magnet dimensions achieving maximum force and stiffness per magnet volume are outlined. Finally, the calculation method was applied to the PMB of a magnetically levitated fan, and the analytical results were compared with a finite element analysis.", "title": "" },
    { "docid": "d03abae94005c27aa46c66e1cdc77b23", "text": "The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6-8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they only used a single T1 or T2 images, or the combination of T1 and T2 images. 
In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep models in which trainable filters and local neighborhood pooling operations are applied alternatingly on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multi-modality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement.", "title": "" }, { "docid": "5e9669e422bbbb2c964e13ebf65703af", "text": "Behavioral problems are a major source of poor welfare and premature mortality in companion dogs. Previous studies have demonstrated associations between owners' personality and psychological status and the prevalence and/or severity of their dogs' behavior problems. However, the mechanisms responsible for these associations are currently unknown. Other studies have detected links between the tendency of dogs to display behavior problems and their owners' use of aversive or confrontational training methods. This raises the possibility that the effects of owner personality and psychological status on dog behavior are mediated via their influence on the owner's choice of training methods. We investigated this hypothesis in a self-selected, convenience sample of 1564 current dog owners using an online battery of questionnaires designed to measure, respectively, owner personality, depression, emotion regulation, use of aversive/confrontational training methods, and owner-reported dog behavior. Multivariate linear and logistic regression analyses identified modest, positive associations between owners' use of aversive/confrontational training methods and the prevalence/severity of the following dog behavior problems: owner-directed aggression, stranger-directed aggression, separation problems, chasing, persistent barking, and house-soiling (urination and defecation when left alone). The regression models also detected modest associations between owners' low scores on four of the 'Big Five' personality dimensions (Agreeableness, Emotional Stability, Extraversion & Conscientiousness) and their dogs' tendency to display higher rates of owner-directed aggression, stranger-directed fear, and/or urination when left alone. The study found only weak evidence to support the hypothesis that these relationships between owner personality and dog behavior were mediated via the owners' use of punitive training methods, but it did detect a more than five-fold increase in the use of aversive/confrontational training techniques among men with moderate depression. 
Further research is needed to clarify the causal relationship between owner personality and psychological status and the behavioral problems of companion dogs.", "title": "" }, { "docid": "6d5e80293931396556cf5fbe64e9c2d2", "text": "Rotors of electrical high speed machines are subject to high stress, limiting the rated power of the machines. This paper describes the design process of a high-speed rotor of a Permanent Magnet Synchronous Machine (PMSM) for a rated power of 10kW at 100,000 rpm. Therefore, at the initial design the impact of the rotor radius to critical parameters is analyzed analytically. In particular, critical parameters are mechanical stress due to high centrifugal forces and natural bending frequencies. Furthermore, air friction losses, heating the rotor and the stator additionally, are no longer negligible compared to conventional machines and must be considered in the design process. These mechanical attributes are controversial to the electromagnetic design, increasing the effective magnetic air gap, for example. Thus, investigations are performed to achieve sufficient mechanical strength without a significant reduction of air gap flux density or causing thermal problems. After initial design by means of analytical estimations, an optimization of rotor geometry and materials is performed by means of the finite element method (FEM).", "title": "" }, { "docid": "18dbbf0338d138f71a57b562883f0677", "text": "We present the analytical capability of TecDEM, a MATLAB toolbox used in conjunction with Global DEMs for the extraction of tectonic geomorphologic information. TecDEM includes a suite of algorithms to analyze topography, extracted drainage networks and sub-basins. The aim of part 2 of this paper series is the generation of morphometric maps for surface dynamics and basin analysis. TecDEM therefore allows the extraction of parameters such as isobase, incision, drainage density and surface roughness maps. We also provide tools for basin asymmetry and hypsometric analysis. These are efficient graphical user interfaces (GUIs) for mapping drainage deviation from basin mid-line and basin hypsometry. A morphotectonic interpretation of the Kaghan Valley (Northern Pakistan) is performed with TecDEM and the findings indicate a high correlation between surface dynamics and basin analysis parameters with neotectonic features in the study area. & 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "807cd6adc45a2adb7943c5a0fb5baa94", "text": "Reliable performance evaluations require the use of representative workloads. This is no easy task because modern computer systems and their workloads are complex, with many interrelated attributes and complicated structures. Experts often use sophisticated mathematics to analyze and describe workload models, making these models difficult for practitioners to grasp. This book aims to close this gap by emphasizing the intuition and the reasoning behind the definitions and derivations related to the workload models. It provides numerous examples from real production systems, with hundreds of graphs. Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system. 
The descriptive statistics techniques covered are also useful for other domains.", "title": "" }, { "docid": "6bbfac62a2f99c028c7df3b586b41f68", "text": "Depression is a common mental health condition for which many mobile apps aim to provide support. This review aims to identify self-help apps available exclusively for people with depression and evaluate those that offer cognitive behavioural therapy (CBT) or behavioural activation (BA). One hundred and seventeen apps have been identified after searching both the scientific literature and the commercial market. 10.26% (n = 12) of these apps identified through our search offer support that seems to be consistent with evidence-based principles of CBT or BA. Taking into account the non existence of effectiveness/efficacy studies, and the low level of adherence to the core ingredients of the CBT/BA models, the utility of these CBT/BA apps are questionable. The usability of reviewed apps is highly variable and they rarely are accompanied by explicit privacy or safety policies. Despite the growing public demand, there is a concerning lack of appropiate CBT or BA apps, especially from a clinical and legal point of view. The application of superior scientific, technological, and legal knowledge is needed to improve the development, testing, and accessibility of apps for people with depression.", "title": "" }, { "docid": "2802c89f5b943ea0bee357b36d072ada", "text": "Motivation: Alzheimer’s disease (AD) is an incurable neurological condition which causes progressive mental deterioration, especially in the elderly. The focus of our work is to improve our understanding about the progression of AD. By finding brain regions which degenerate together in AD we can understand how the disease progresses during the lifespan of an Alzheimer’s patient. Our aim is to work towards not only achieving diagnostic performance but also generate useful clinical information. Objective: The main objective of this study is to find important sub regions of the brain which undergo neuronal degeneration together during AD using deep learning algorithms and other machine learning techniques. Methodology: We extract 3D brain region patches from 100 subject MRI images using a predefined anatomical atlas. We have devised an ensemble of pair predictors which use 3D convolutional neural networks to extract salient features for AD from a pair of regions in the brain. We then train them in a supervised manner and use a boosting algorithm to find the weightage of each pair predictor towards the final classification. We use this weightage as the strength of correlation and saliency between the two input sub regions of the pair predictor. Result: We were able to retrieve sub regional association measures for 100 sub region pairs using the proposed method. Our approach was able to automatically learn sub regional association structure in AD directly from images. Our approach also provides an insight into computational methods for demarcating effects of AD from effects of ageing (and other neurological diseases) on our neuroanatomy. Our meta classifier gave a final accuracy of 81.79% for AD classification relative to healthy subjects using a single imaging modality dataset.", "title": "" }, { "docid": "5140cad8babfc17c660bf9ca5dfa5fb6", "text": "In this paper, the fundamental problem of distribution and proactive caching of computing tasks in fog networks is studied under latency and reliability constraints. 
In the proposed scenario, computing can be executed either locally at the user device or offloaded to an edge cloudlet. Moreover, cloudlets exploit both their computing and storage capabilities by proactively caching popular task computation results to minimize computing latency. To this end, a clustering method to group spatially proximate user devices with mutual task popularity interests and their serving cloudlets is proposed. Then, cloudlets can proactively cache the popular tasks' computations of their cluster members to minimize computing latency. Additionally, the problem of distributing tasks to cloudlets is formulated as a matching game in which a cost function of computing delay is minimized under latency and reliability constraints. Simulation results show that the proposed scheme guarantees reliable computations with bounded latency and achieves up to 91% decrease in computing latency as compared to baseline schemes.", "title": "" }, { "docid": "b77363417b2e5db93d9f1e0447bd1932", "text": "UK Government regularly applies challenging strategic targets to the construction industry, chief amongst these are requirements for more rapid project delivery processes and consistent improvements to the time predictability aspects of on-site construction delivery periods. Latest industry KPI data has revealed a recent increase across measures of time predictability, however more than half of UK construction projects continue to exceed agreed time schedules. The aim of this research was to investigate the diffusion of 4D BIM innovation as adoption of this innovation is seen as a potential solution in response to these targets of construction time predictability. Through purposive sampling, a quantitative survey was undertaken using an online questionnaire that measured 4D BIM innovation adoption using accepted diffusion research methods. These included an exploration of several perceived attributes including compatibility, complexity, observability and the relative advantages of 4D BIM innovation in comparison against conventional functions of construction planning and against stages of the construction planning processes. Descriptive and inferential analysis of the data addresses how the benefits are being realised and explore reasons for adoption or rejection decisions of this innovation. Results indicate an increasing rate of 4D BIM innovation adoption and reveal the typical time lag between awareness and first use.", "title": "" }, { "docid": "d9176322068e6ca207ae913b1164b3da", "text": "Topic Detection and Tracking (TDT) is a variant of classification in which the classes are not known or fixed in advance. Consider for example an incoming stream of news articles or email messages that are to be classified by topic; new classes must be created as new topics arise. The problem is a challenging one for machine learning. Instances of new topics must be recognized as not belonging to any of the existing classes (detection), and instances of old topics must be correctly classified (tracking), often with extremely little training data per class. This paper proposes a new approach to TDT based on probabilistic, generative models. Strong statistical techniques are used to address the many challenges: hierarchical shrinkage for sparse data, statistical \"garbage collection\" for new event detection, clustering in time to separate the different events of a common topic, and deterministic annealing for creating the hierarchy. 
Preliminary experimental results show promise.", "title": "" }, { "docid": "cd1cfbdae08907e27a4e1c51e0508839", "text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.", "title": "" }, { "docid": "c633668d5933118db60ea1c9b79333ea", "text": "A robot exoskeleton which is inspired by the human musculoskeletal system has been developed for lower limb rehabilitation. The device was manufactured using a novel technique employing 3D printing and fiber reinforcement to make one-of-a-kind form fitting human-robot connections. Actuation of the exoskeleton is achieved using PMAs (pneumatic air muscles) and cable actuation to give the system inherent compliance while maintaining a very low mass. The entire system was modeled including a new hybrid model for PMAs. Simulation and experimental results for a force and impedance based trajectory tracking controller demonstrate the feasibility for using the HuREx system for gait and rehabilitation training.", "title": "" } ]
scidocsrr
943b6a72a22a46e6175f0db92c920e72
Popularity and Quality in Social News Aggregators: A Study of Reddit and Hacker News
[ { "docid": "c77fad43abe34ecb0a451a3b0b5d684e", "text": "Search engine click logs provide an invaluable source of relevance information, but this information is biased. A key source of bias is presentation order: the probability of click is influenced by a document's position in the results page. This paper focuses on explaining that bias, modelling how probability of click depends on position. We propose four simple hypotheses about how position bias might arise. We carry out a large data-gathering effort, where we perturb the ranking of a major search engine, to see how clicks are affected. We then explore which of the four hypotheses best explains the real-world position effects, and compare these to a simple logistic regression model. The data are not well explained by simple position models, where some users click indiscriminately on rank 1 or there is a simple decay of attention over ranks. A â cascade' model, where users view results from top to bottom and leave as soon as they see a worthwhile document, is our best explanation for position bias in early ranks", "title": "" }, { "docid": "957170b015e5acd4ab7ce076f5a4c900", "text": "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.", "title": "" }, { "docid": "437d9a2146e05be85173b14176e4327c", "text": "Can a system of distributed moderation quickly and consistently separate high and low quality comments in an online conversation? Analysis of the site Slashdot.org suggests that the answer is a qualified yes, but that important challenges remain for designers of such systems. Thousands of users act as moderators. Final scores for comments are reasonably dispersed and the community generally agrees that moderations are fair. On the other hand, much of a conversation can pass before the best and worst comments are identified. Of those moderations that were judged unfair, only about half were subsequently counterbalanced by a moderation in the other direction. And comments with low scores, not at top-level, or posted late in a conversation were more likely to be overlooked by moderators.", "title": "" }, { "docid": "b5a809969347e24eb0192c04ef6dd21f", "text": "News articles are extremely time sensitive by nature. 
There is also intense competition among news items to propagate as widely as possible. Hence, the task of predicting the popularity of news items on the social web is both interesting and challenging. Prior research has dealt with predicting eventual online popularity based on early popularity. It is most desirable, however, to predict the popularity of items prior to their release, fostering the possibility of appropriate decision making to modify an article and the manner of its publication. In this paper, we construct a multi-dimensional feature space derived from properties of an article and evaluate the efficacy of these features to serve as predictors of online popularity. We examine both regression and classification algorithms and demonstrate that despite randomness in human behavior, it is possible to predict ranges of popularity on twitter with an overall 84% accuracy. Our study also serves to illustrate the differences between traditionally prominent sources and those immensely popular on the social web.", "title": "" } ]
[ { "docid": "390f92430582d13bc2b22a9047ea01a6", "text": "This paper considers a proportional hazards model, which allows one to examine the extent to which covariates interact nonlinearly with an exposure variable, for analysis of lifetime data. A local partial-likelihood technique is proposed to estimate nonlinear interactions. Asymptotic normality of the proposed estimator is established. The baseline hazard function, the bias and the variance of the local likelihood estimator are consistently estimated. In addition, a one-step local partial-likelihood estimator is presented to facilitate the computation of the proposed procedure and is demonstrated to be as efficient as the fully iterated local partial-likelihood estimator. Furthermore, a penalized local likelihood estimator is proposed to select important risk variables in the model. Numerical examples are used to illustrate the effectiveness of the proposed procedures.", "title": "" }, { "docid": "5c90f5a934a4d936257467a14a058925", "text": "We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex", "title": "" }, { "docid": "e226452a288c3067ef8ee613f0b64090", "text": "Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models, however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQVAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete bottleneck with EM helps us achieve better image generation results on CIFAR-10, and together with knowledge distillation, allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.", "title": "" }, { "docid": "6eed03674521ecf9a558ab0059fc167f", "text": "University professors traditionally struggle to incorporate software testing into their course curriculum. Worries include double-grading for correctness of both source and test code and finding time to teach testing as a topic. Test-driven development (TDD) has been suggested as a possible solution to improve student software testing skills and to realize the benefits of testing. According to most existing studies, TDD improves software quality and student productivity. 
This paper surveys the current state of TDD experiments conducted exclusively at universities. Similar surveys compare experiments in both the classroom and industry, but none have focused strictly on academia.", "title": "" }, { "docid": "0232c4cfec6d4ac0339104c563506245", "text": "We propose Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) to address the problem of person re-identification on multi-cameras. Re-identifications on different cameras are considered as related tasks, which allows the shared information among different tasks to be explored to improve the re-identification accuracy. The MTL-LORAE framework integrates low-level features with mid-level attributes as the descriptions for persons. To improve the accuracy of such description, we introduce the low-rank attribute embedding, which maps original binary attributes into a continuous space utilizing the correlative relationship between each pair of attributes. In this way, inaccurate attributes are rectified and missing attributes are recovered. The resulting objective function is constructed with an attribute embedding error and a quadratic loss concerning class labels. It is solved by an alternating optimization strategy. The proposed MTL-LORAE is tested on four datasets and is validated to outperform the existing methods with significant margins.", "title": "" }, { "docid": "7973cb32f19b61b0cc88671e4939e32b", "text": "Trolling behaviors are extremely diverse, varying by context, tactics, motivations, and impact. Definitions, perceptions of, and reactions to online trolling behaviors vary. Since not all trolling is equal or deviant, managing these behaviors requires context sensitive strategies. This paper describes appropriate responses to various acts of trolling in context, based on perceptions of college students in North America. In addition to strategies for dealing with deviant trolling, this paper illustrates the complexity of dealing with socially and politically motivated trolling.", "title": "" }, { "docid": "72c0fecdbcc27b6af98373dc3c03333b", "text": "The amino acid sequence of the heavy chain of Bombyx mori silk fibroin was derived from the gene sequence. The 5,263-residue (391-kDa) polypeptide chain comprises 12 low-complexity \"crystalline\" domains made up of Gly-X repeats and covering 94% of the sequence; X is Ala in 65%, Ser in 23%, and Tyr in 9% of the repeats. The remainder includes a nonrepetitive 151-residue header sequence, 11 nearly identical copies of a 43-residue spacer sequence, and a 58-residue C-terminal sequence. The header sequence is homologous to the N-terminal sequence of other fibroins with a completely different crystalline region. In Bombyx mori, each crystalline domain is made up of subdomains of approximately 70 residues, which in most cases begin with repeats of the GAGAGS hexapeptide and terminate with the GAAS tetrapeptide. Within the subdomains, the Gly-X alternance is strict, which strongly supports the classic Pauling-Corey model, in which beta-sheets pack on each other in alternating layers of Gly/Gly and X/X contacts. 
When fitting the actual sequence to that model, we propose that each subdomain forms a beta-strand and each crystalline domain a two-layered beta-sandwich, and we suggest that the beta-sheets may be parallel, rather than antiparallel, as has been assumed up to now.", "title": "" }, { "docid": "1c19d0b156673e70544fe93154f1ae33", "text": "Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is utilized by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible, without having to mark-up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, only asking for advice where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as, information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing.", "title": "" }, { "docid": "830abfc28745f469cd24bb730111afcb", "text": "User interface (UI) is point of interaction between user and computer software. The success and failure of a software application depends on User Interface Design (UID). Possibility of using a software, easily using and learning are issues influenced by UID. The UI is significant in designing of educational software (e-Learning). Principles and concepts of learning should be considered in addition to UID principles in UID for e-learning. In this regard, to specify the logical relationship between education, learning, UID and multimedia at first we readdress the issues raised in previous studies. It is followed by examining the principle concepts of e-learning and UID. Then, we will see how UID contributes to e-learning through the educational software built by authors. Also we show the way of using UI to improve learning and motivating the learners and to improve the time efficiency of using e-learning software. Keywords—e-Learning, User Interface Design, Self learning, Educational Multimedia", "title": "" }, { "docid": "8848ddd97501ff8aa5e571852e7fb447", "text": "Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. They must use little energy and be robust to environmental conditions, while also providing common services that make it easy to write applications. In TinyOS, the current state of the art in sensor node operating systems, reusable components implement common services, but each node runs a single statically-linked system image, making it hard to run multiple applications or incrementally update applications. We present SOS, a new operating system for mote-class sensor nodes that takes a more dynamic point on the design spectrum. SOS consists of dynamically-loaded modules and a common kernel, which implements messaging, dynamic memory, and module loading and unloading, among other services. 
Modules are not processes: they are scheduled cooperatively and there is no memory protection. Nevertheless, the system protects against common module bugs using techniques such as typed entry points, watchdog timers, and primitive resource garbage collection. Individual modules can be added and removed with minimal system interruption. We describe SOS's design and implementation, discuss tradeoffs, and compare it with TinyOS and with the Maté virtual machine. Our evaluation shows that despite the dynamic nature of SOS and its higher-level kernel interface, its long term total usage nearly identical to that of systems such as Matè and TinyOS.", "title": "" }, { "docid": "a934b69f281d0bb693982fbc48a4c677", "text": "We investigate the impact of preextracting and tokenizing bigram collocations on topic models. Using extensive experiments on four different corpora, we show that incorporating bigram collocations in the document representation creates more parsimonious models and improves topic coherence. We point out some problems in interpreting test likelihood and test perplexity to compare model fit, and suggest an alternate measure that penalizes model complexity. We show how the Akaike information criterion is a more appropriate measure, which suggests that using a modest number (up to 1000) of top-ranked bigrams is the optimal topic modelling configuration. Using these 1000 bigrams also results in improved topic quality over unigram tokenization. Further increases in topic quality can be achieved by using up to 10,000 bigrams, but this is at the cost of a more complex model. We also show that multiword (bigram and longer) named entities give consistent results, indicating that they should be represented as single tokens. This is the first work to explicitly study the effect of n-gram tokenization on LDA topic models, and the first work to make empirical recommendations to topic modelling practitioners, challenging the standard practice of unigram-based tokenization.", "title": "" }, { "docid": "df2be33740334d9e9db5d9f2911153ed", "text": "Mobile devices such as smartphones and tablets offer great new possibilities for the creation of 3D games and virtual reality environments. However, interaction with objects in these virtual worlds is often difficult -- for example due to the devices' small form factor. In this paper, we define different 3D visualization concepts and evaluate related interactions such as navigation and selection of objects. Detailed experiments with a smartphone and a tablet illustrate the advantages and disadvantages of the various 3D visualization concepts. Our results provide new insight with respect to interaction and highlight important aspects for the design of interactive virtual environments on mobile devices and related applications -- especially for mobile 3D gaming.", "title": "" }, { "docid": "8efee8d7c3bf229fa5936209c43a7cff", "text": "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. 
We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.", "title": "" }, { "docid": "87a04076b2137b67d6f04172e7def48b", "text": "An architecture for low-noise spatial cancellation of co-channel interferer (CCI) at RF in a digital beamforming (DBF)/MIMO receiver (RX) array is presented. The proposed RF cancellation can attenuate CCI prior to the ADC in a DBF/MIMO RX array while preserving a field-of-view (FoV) in each array element, enabling subsequent DSP for multi-beamforming. A novel hybrid-coupler/polyphase-filter based input coupling scheme that simplifies spatial selection of CCI and enables low-noise cancellation is described. A 4-element 10GHz prototype is implemented in 65nm CMOS that achieves >20dB spatial cancellation of CCI while adding <;1.5dB output noise.", "title": "" }, { "docid": "324bbe1712342fcdbc29abfbebfaf29c", "text": "Non-interactive zero-knowledge proofs are a powerful cryptographic primitive used in privacypreserving protocols. We design and build C∅C∅, the first system enabling developers to build efficient, composable, non-interactive zero-knowledge proofs for generic, user-defined statements. C∅C∅ extends state-of-the-art SNARK constructions by applying known strengthening transformations to yield UC-composable zero-knowledge proofs suitable for modular use in larger cryptographic protocols. To attain fast practical performance, C∅C∅ includes a library of several “SNARK-friendly” cryptographic primitives. These primitives are used in the strengthening transformations in order to reduce the overhead of achieving composable security. Our open-source library of optimized arithmetic circuits for these functions are up to 40× more efficient than standard implementations and are thus of independent interest for use in other NIZK projects. Finally, we evaluate C∅C∅ on applications such as anonymous credentials, private smart contracts, and nonoutsourceable proof-of-work puzzles and demonstrate 5× to 8× speedup in these application settings compared to naive implementations.", "title": "" }, { "docid": "5f9cd16a420b2f6b04e504d2b2dae111", "text": "This paper addresses on-chip solar energy harvesting and proposes a circuit that can be employed to generate high voltages from integrated photodiodes. The proposed circuit uses a switched-inductor approach to avoid stacking photodiodes to generate high voltages. The effect of parasitic photodiodes present in integrated circuits (ICs) is addressed and a solution to minimize their impact is presented. The proposed circuit employs two switch transistors and two off-chip components: an inductor and a capacitor. 
A theoretical analysis of a switched-inductor dc-dc converter is carried out and a mathematical model of the energy harvester is developed. Measurements taken from a fabricated IC are presented and shown to be in good agreement with hardware measurements. Measurement results show that voltages of up to 2.81 V (depending on illumination and loading conditions) can be generated from a single integrated photodiode. The energy harvester circuit achieves a maximum conversion efficiency of 59%.", "title": "" }, { "docid": "5859379f3c4c5a7186c9dc8c85e1e384", "text": "Purpose – Investigate the use of two imaging-based methods – coded pattern projection and laser-based triangulation – to generate 3D models as input to a rapid prototyping pipeline. Design/methodology/approach – Discusses structured lighting technologies as suitable imaging-based methods. Two approaches, coded-pattern projection and laser-based triangulation, are specifically identified and discussed in detail. Two commercial systems are used to generate experimental results. These systems include the Genex Technologies 3D FaceCam and the Integrated Vision Products Ranger System. Findings – Presents 3D reconstructions of objects from each of the commercial systems. Research limitations/implications – Provides background in imaging-based methods for 3D data collection and model generation. A practical limitation is that imaging-based systems do not currently meet accuracy requirements, but continued improvements in imaging systems will minimize this limitation. Practical implications – Imaging-based approaches to 3D model generation offer potential to increase scanning time and reduce scanning complexity. Originality/value – Introduces imaging-based concepts to the rapid prototyping pipeline.", "title": "" }, { "docid": "9d867cf4f8e5456e3b01c0768bd1dfaa", "text": "This paper introduces a Projected Principal Component Analysis (Projected-PCA), which employees principal component analysis to the projected (smoothed) data matrix onto a given linear space spanned by covariates. When it applies to high-dimensional factor analysis, the projection removes noise components. We show that the unobserved latent factors can be more accurately estimated than the conventional PCA if the projection is genuine, or more precisely, when the factor loading matrices are related to the projected linear space. When the dimensionality is large, the factors can be estimated accurately even when the sample size is finite. We propose a flexible semi-parametric factor model, which decomposes the factor loading matrix into the component that can be explained by subject-specific covariates and the orthogonal residual component. The covariates' effects on the factor loadings are further modeled by the additive model via sieve approximations. By using the newly proposed Projected-PCA, the rates of convergence of the smooth factor loading matrices are obtained, which are much faster than those of the conventional factor analysis. The convergence is achieved even when the sample size is finite and is particularly appealing in the high-dimension-low-sample-size situation. This leads us to developing nonparametric tests on whether observed covariates have explaining powers on the loadings and whether they fully explain the loadings. 
The proposed method is illustrated by both simulated data and the returns of the components of the S&P 500 index.", "title": "" }, { "docid": "21e9a263934e09654d3b5500fb39e362", "text": "BACKGROUND\nOlder people complain of difficulties in recalling telephone numbers and being able to dial them in the correct order. This study examined the developmental trend of verbal forward digit span across adulthood and aging in a Spanish population, as an index of one of the components of Baddeley’s working memory model—the phonological loop—, which illustrates these two aspects.\n\n\nMETHOD\nA verbal digit span was administered to an incidental sample of 987 participants ranging from 35 to 90 years old. The maximum length was defined that participants could recall of at least two out of three series in the same order as presented with no errors. Demographic variables of gender and educational level were also examined.\n\n\nRESULTS\nThe ANOVA showed that the three main factors (age group, gender and educational level) were significant, but none of the interactions was. Verbal forward digit span decreases during the lifespan, but gender and educational level affect it slightly.\n\n\nCONCLUSION\nPhonological loop is affected by age. The verbal forward digit span in this study is generally lower than the one reported in other studies.", "title": "" }, { "docid": "40a0e4f114b066ef7c090517a6befad5", "text": "Utility asset managers and engineers are concerned about the life and reliability of their power transformers which depends on the continued life of the paper insulation. The ageing rate of the paper is affected by water, oxygen and acids. Traditionally, the ageing rate of paper has been studied in sealed vessels however this approach does not allow the possibility to assess the affect of oxygen on paper with different water content. The ageing rate of paper has been studied for dry paper in air (excess oxygen). In these experiments we studied the ageing rate of Kraft and thermally upgraded Kraft paper in medium and high oxygen with varying water content. Furthermore, the oxygen content of the oil in sealed vessels is low which represents only sealed transformers. The ageing rate of the paper has not been determined for free breathing transformers with medium or high oxygen content and for different wetness of paper. In these ageing experiments the water and oxygen content was controlled using a special test rig to compare the ageing rate to previous work and to determine the ageing effect of paper by combining temperature, water content of paper and oxygen content of the oil. We found that the ageing rate of paper with the same water content increased with oxygen content in the oil. Hence, new life curves were developed based on the water content of the paper and the oxygen content of the oil.", "title": "" } ]
scidocsrr
9edfd93c8767e9298d8c03a834e1a49a
WADaR: Joint Wrapper and Data Repair
[ { "docid": "dc6aafe2325dfdea5e758a30c90d8940", "text": "When a query is submitted to a search engine, the search engine returns a dynamically generated result page containing the result records, each of which usually consists of a link to and/or snippet of a retrieved Web page. In addition, such a result page often also contains information irrelevant to the query, such as information related to the hosting site of the search engine and advertisements. In this paper, we present a technique for automatically producing wrappers that can be used to extract search result records from dynamically generated result pages returned by search engines. Automatic search result record extraction is very important for many applications that need to interact with search engines such as automatic construction and maintenance of metasearch engines and deep Web crawling. The novel aspect of the proposed technique is that it utilizes both the visual content features on the result page as displayed on a browser and the HTML tag structures of the HTML source file of the result page. Experimental results indicate that this technique can achieve very high extraction accuracy.", "title": "" }, { "docid": "4a53c792868e971cddfee8210f7eafb6", "text": "We present an unsupervised approach for harvesting the data exposed by a set of structured and partially overlapping data-intensive web sources. Our proposal comes within a formal framework tackling two problems: the data extraction problem, to generate extraction rules based on the input websites, and the data integration problem, to integrate the extracted data in a unified schema. We introduce an original algorithm, WEIR, to solve the stated problems and formally prove its correctness. WEIR leverages the overlapping data among sources to make better decisions both in the data extraction (by pruning rules that do not lead to redundant information) and in the data integration (by reflecting local properties of a source over the mediated schema). Along the way, we characterize the amount of redundancy needed by our algorithm to produce a solution, and present experimental results to show the benefits of our approach with respect to existing solutions.", "title": "" } ]
[ { "docid": "b3c947eb12abdc0abf7f3bc0de9e74fc", "text": "This paper describes the development of two nine-storey elevators control system for a residential building. The control system adopts PLC as controller, and uses a parallel connection dispatching rule based on \"minimum waiting time\" to run two elevators in parallel mode. The paper gives the basic structure, control principle and realization method of the PLC control system in detail. It also presents the ladder diagram of the key aspects of the system. The system has simple peripheral circuit and the operation result showed that it enhanced the reliability and pe.rformance of the elevators.", "title": "" }, { "docid": "55694b963cde47e9aecbeb21fb0e79cf", "text": "The rise of Uber as the global alternative taxi operator has attracted a lot of interest recently. Aside from the media headlines which discuss the new phenomenon, e.g. on how it has disrupted the traditional transportation industry, policy makers, economists, citizens and scientists have engaged in a discussion that is centred around the means to integrate the new generation of the sharing economy services in urban ecosystems. In this work, we aim to shed new light on the discussion, by taking advantage of a publicly available longitudinal dataset that describes the mobility of yellow taxis in New York City. In addition to movement, this data contains information on the fares paid by the taxi customers for each trip. As a result we are given the opportunity to provide a first head to head comparison between the iconic yellow taxi and its modern competitor, Uber, in one of the world’s largest metropolitan centres. We identify situations when Uber X, the cheapest version of the Uber taxi service, tends to be more expensive than yellow taxis for the same journey. We also demonstrate how Uber’s economic model effectively takes advantage of well known patterns in human movement. Finally, we take our analysis a step further by proposing a new mobile application that compares taxi prices in the city to facilitate traveller’s taxi choices, hoping to ultimately to lead to a reduction of commuter costs. Our study provides a case on how big datasets that become public can improve urban services for consumers by offering the opportunity for transparency in economic sectors that lack up to date regulations.", "title": "" }, { "docid": "80b999a5c44d87cd3464facb6eea6bb8", "text": "The aim of this study was to assess the efficacy of cognitive training, specifically computerized cognitive training (CCT) and virtual reality cognitive training (VRCT), programs for individuals living with mild cognitive impairment (MCI) or dementia and therefore at high risk of cognitive decline. After searching a range of academic databases (CINHAL, PSYCinfo, and Web of Science), the studies evaluated (N = 16) were categorized as CCT (N = 10), VRCT (N = 3), and multimodal interventions (N = 3). Effect sizes were calculated, but a meta-analysis was not possible because of the large variability of study design and outcome measures adopted. The cognitive domains of attention, executive function, and memory (visual and verbal) showed the most consistent improvements. The positive effects on psychological outcomes (N = 6) were significant reductions on depressive symptoms (N = 3) and anxiety (N = 2) and improved perceived use of memory strategy (N = 1). Assessments of activities of daily living demonstrated no significant improvements (N = 8). 
Follow-up studies (N = 5) demonstrated long-term improvements in cognitive and psychological outcomes (N = 3), and the intervention groups showed a plateau effect of cognitive functioning compared with the cognitive decline experienced by control groups (N = 2). CCT and VRCT were moderately effective in long-term improvement of cognition for those at high risk of cognitive decline. Total intervention time did not mediate efficacy. Future research needs to improve study design by including larger samples, longitudinal designs, and a greater range of outcome measures, including functional and quality of life measures, to assess the wider effect of cognitive training on individuals at high risk of cognitive decline.", "title": "" }, { "docid": "8913c543d350ff147b9f023729f4aec3", "text": "The reality gap, which often makes controllers evolved in simulation inefficient once transferred onto the physical robot, remains a critical issue in evolutionary robotics (ER). We hypothesize that this gap highlights a conflict between the efficiency of the solutions in simulation and their transferability from simulation to reality: the most efficient solutions in simulation often exploit badly modeled phenomena to achieve high fitness values with unrealistic behaviors. This hypothesis leads to the transferability approach, a multiobjective formulation of ER in which two main objectives are optimized via a Pareto-based multiobjective evolutionary algorithm: 1) the fitness; and 2) the transferability, estimated by a simulation-to-reality (STR) disparity measure. To evaluate this second objective, a surrogate model of the exact STR disparity is built during the optimization. This transferability approach has been compared to two reality-based optimization methods, a noise-based approach inspired from Jakobi's minimal simulation methodology and a local search approach. It has been validated on two robotic applications: 1) a navigation task with an e-puck robot; and 2) a walking task with a 8-DOF quadrupedal robot. For both experimental setups, our approach successfully finds efficient and well-transferable controllers only with about ten experiments on the physical robot.", "title": "" }, { "docid": "b3dcbd8a41e42ae6e748b07c18dbe511", "text": "There is inconclusive evidence whether practicing tasks with computer agents improves people’s performance on these tasks. This paper studies this question empirically using extensive experiments involving bilateral negotiation and threeplayer coordination tasks played by hundreds of human subjects. We used different training methods for subjects, including practice interactions with other human participants, interacting with agents from the literature, and asking participants to design an automated agent to serve as their proxy in the task. Following training, we compared the performance of subjects when playing state-of-the-art agents from the literature. The results revealed that in the negotiation settings, in most cases, training with computer agents increased people’s performance as compared to interacting with people. In the three player coordination game, training with computer agents increased people’s performance when matched with the state-of-the-art agent. 
These results demonstrate the efficacy of using computer agents as tools for improving people’s skills when interacting in strategic settings, saving considerable effort and providing better performance than when interacting with human counterparts.", "title": "" }, { "docid": "a77c113c691a61101cba1136aaf4b90c", "text": "This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus.", "title": "" }, { "docid": "7fab7940321a606b10225d14df46ce65", "text": "Domain adaptation aims to learn models on a supervised source domain that perform well on an unsupervised target. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e. static data sets. However, with large-scale or dynamic data sources, data from a defined domain is not usually available all at once. For instance, in a streaming data scenario, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts applicable to large-scale and streaming data scenarios. The model is adapted sequentially over incoming unsupervised streaming data batches. This enables improvements over several batches without the need for any additionally annotated data. To demonstrate the effectiveness of our proposed framework, we modify associative domain adaptation to work well on source and target data batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy not only for the currently adapted batch, but also when applied on future stream batches. Furthermore, we show the applicability of our associative learning modifications to semantic segmentation, where we achieve competitive results.", "title": "" }, { "docid": "288f8a2dab0c32f85c313f5a145e47a5", "text": "Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm. 1 Motivation The central problem of reinforcement learning is value function approximation: how to accurately estimate the total future reward from a given state. Recent successes have used deep neural networks to approximate the value function, resulting in state-of-the-art performance in a variety of challenging domains [9]. Neural networks are most effective when the desired target function is smooth. 
However, value functions are, by their very nature, discontinuous functions with sharp variations over time. In this paper we introduce a representation of value that matches the natural temporal structure of value functions. A value function represents the expected sum of future discounted rewards. If non-zero rewards occur infrequently but reliably, then an accurate prediction of the cumulative discounted reward rises as such rewarding moments approach and drops immediately after. This is depicted schematically with the dashed black line in Figure 1. The true value function is quite smooth, except immediately after receiving a reward when there is a sharp drop. This is a pervasive scenario because many domains associate positive or negative reinforcements to salient events (like picking up an object, hitting a wall, or reaching a goal position). The problem is that the agent’s observations tend to be smooth in time, so learning an accurate value estimate near those sharp drops puts strain on the function approximator – especially when employing differentiable function approximators such as neural networks that naturally make smooth maps from observations to outputs. To address this problem, we incorporate the temporal structure of cumulative discounted rewards into the value function itself. The main idea is that, by default, the value function can respect the reward sequence. If no reward is observed, then the next value smoothly matches the previous value, but becomes a little larger due to the discount. If a reward is observed, it should be subtracted out from the previous value: in other words a reward that was expected has now been consumed. Figure 1: After the same amount of training, our proposed method (red) produces much more accurate estimates of the true value function (dashed black), compared to the baseline (blue). The main plot shows discounted future returns as a function of the step in a sequence of states; the inset plot shows the RMSE when training on this data, as a function of network updates. See section 4 for details. The natural value approximator (NVA) combines the previous value with the observed rewards and discounts, which makes this sequence of values easy to represent by a smooth function approximator such as a neural network. Natural value approximators may also be helpful in partially observed environments. Consider a situation in which an agent stands on a hill top. The goal is to predict, at each step, how many steps it will take until the agent has crossed a valley to another hill top in the distance. There is fog in the valley, which means that if the agent’s state is a single observation from the valley it will not be able to accurately predict how many steps remain. In contrast, the value estimate from the initial hill top may be much better, because the observation is richer. This case is depicted schematically in Figure 2. Natural value approximators may be effective in these situations, since they represent the current value in terms of previous value estimates. 2 Problem definition We consider the typical scenario studied in reinforcement learning, in which an agent interacts with an environment at discrete time intervals: at each time step t the agent selects an action as a function of the current state, which results in a transition to the next state and a reward. 
The goal of the agent is to maximize the discounted sum of rewards collected in the long run from a set of initial states [12]. The interaction between the agent and the environment is modelled as a Markov Decision Process (MDP). An MDP is a tuple (S, A, R, γ, P) where S is a state space, A is an action space, R : S×A×S → D(R) is a reward function that defines a distribution over the reals for each combination of state, action, and subsequent state, P : S×A → D(S) defines a distribution over subsequent states for each state and action, and γ_t ∈ [0, 1] is a scalar, possibly time-dependent, discount factor. One common goal is to make accurate predictions under a behaviour policy π : S → D(A) of the value v_π(s) ≡ E[R_1 + γ_1 R_2 + γ_1 γ_2 R_3 + ... | S_0 = s]. (1) The expectation is over the random variables A_t ∼ π(S_t), S_{t+1} ∼ P(S_t, A_t), and R_{t+1} ∼ R(S_t, A_t, S_{t+1}), ∀t ∈ N. For instance, the agent can repeatedly use these predictions to improve its policy. The values satisfy the recursive Bellman equation [2] v_π(s) = E[R_{t+1} + γ_{t+1} v_π(S_{t+1}) | S_t = s]. We consider the common setting where the MDP is not known, and so the predictions must be learned from samples. The predictions are made by an approximate value function v(s; θ), where θ are parameters that are learned. The approximation of the true value function can be formed by temporal difference (TD) learning [10], where the estimate at time t is updated towards Z^1_t ≡ R_{t+1} + γ_{t+1} v(S_{t+1}; θ) or Z^n_t ≡ Σ_{i=1}^{n} (Π_{k=1}^{i−1} γ_{t+k}) R_{t+i} + (Π_{k=1}^{n} γ_{t+k}) v(S_{t+n}; θ), (2) where Z^n_t is the n-step bootstrap target, and the TD-error is δ^n_t ≡ Z^n_t − v(S_t; θ). 3 Proposed solution: Natural value approximators The conventional approach to value function approximation produces a value estimate from features associated with the current state. In states where the value approximation is poor, it can be better to rely more on a combination of the observed sequence of rewards and older but more reliable value estimates that are projected forward in time. Combining these estimates can potentially be more accurate than using one alone. These ideas lead to an algorithm that produces three estimates of the value at time t. The first estimate, V_t ≡ v(S_t; θ), is a conventional value function estimate at time t. The second estimate, G^p_t ≡ (G^β_{t−1} − R_t) / γ_t if γ_t > 0 and t > 0, (3) is a projected value estimate computed from the previous value estimate, the observed reward, and the observed discount for time t. The third estimate, G^β_t ≡ β_t G^p_t + (1 − β_t) V_t = (1 − β_t) V_t + β_t (G^β_{t−1} − R_t) / γ_t, (4) is a convex combination of the first two estimates, formed by a time-dependent blending coefficient β_t. This coefficient is a learned function of state β(·; θ) : S → [0, 1], over the same parameters θ, and we denote β_t ≡ β(S_t; θ). We call G^β_t the natural value estimate at time t and we call the overall approach natural value approximators (NVA). Ideally, the natural value estimate will become more accurate than either of its constituents from training. The value is learned by minimizing the sum of two losses. The first loss captures the difference between the conventional value estimate V_t and the target Z_t, weighted by how much it is used in the natural value estimate, J_V ≡ E[ [[1 − β_t]] ([[Z_t]] − V_t)^2 ], (5) where we introduce the stop-gradient identity function [[x]] = x that is defined to have a zero gradient everywhere, that is, gradients are not back-propagated through this function. 
The second loss captures the difference between the natural value estimate and the target, but it provides gradients only through the coefficient β_t, J_β ≡ E[ ([[Z_t]] − (β_t [[G^p_t]] + (1 − β_t) [[V_t]]))^2 ]. (6) These two losses are summed into a joint loss, J = J_V + c_β J_β, (7) where c_β is a scalar trade-off parameter. When conventional stochastic gradient descent is applied to minimize this loss, the parameters of V_t are adapted with the first loss and parameters of β_t are adapted with the second loss. When bootstrapping on future values, the most accurate value estimate is best, so using G^β_t instead of V_t leads to refined prediction targets Z^β_t ≡ R_{t+1} + γ_{t+1} G^β_{t+1} or Z^{β,n}_t ≡ Σ_{i=1}^{n} (Π_{k=1}^{i−1} γ_{t+k}) R_{t+i} + (Π_{k=1}^{n} γ_{t+k}) G^β_{t+n}. (8) 4 Illustrative Examples We now provide some examples of situations where natural value approximations are useful. In both examples, the value function is difficult to estimate well uniformly in all states we might care about, and the accuracy can be improved by using the natural value estimate G^β_t instead of the direct value estimate V_t. (Note the mixed recursion in the definition: G^β depends on G^p, and vice versa.) Sparse rewards: Figure 1 shows an example of value function approximation. To separate concerns, this is a supervised learning setup (regression) with the true value targets provided (dashed black line). Each point 0 ≤ t ≤ 100 on the horizontal axis corresponds to one state S_t in a single sequence. The shape of the target values stems from a handful of reward events, and discounting with γ = 0.9. We mimic observations that smoothly vary across time by 4 equally spaced radial basis functions, so S_t ∈ R^4. The approximators v(s) and β(s) are two small neural networks with one hidden layer of 32 ReLU units each, and a single linear or sigmoid output unit, respectively. The input", "title": "" }, { "docid": "95a102f45ff856d2064d8042b0b1a0ad", "text": "Diagnosis and monitoring of health is a very important task in the health care industry. Due to time constraints people are not visiting hospitals, which could lead to a lot of health issues in one instant of time. Previously, most of the health care systems have been developed to predict and diagnose the health of the patients, by which people who are busy in their schedule can also monitor their health at regular intervals. Many studies have shown that early prediction is the best way to cure health because early diagnosis will help and alert the patients to know the health status. In this paper, we review the various Internet of Things (IoT) enabled devices and their actual implementation in the area of health care, children’s health, monitoring of the patients, etc. Further, this paper addresses how different innovations such as servers, ambient intelligence and sensors can be leveraged in the health care context, and determines how they can facilitate economies and societies in terms of suitable development. Keywords: Internet of Things (IoT); ambient intelligence; monitoring; innovations; leveraged.", "title": "" }, { "docid": "6c5cabfa5ee5b9d67ef25658a4b737af", "text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. 
One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression", "title": "" }, { "docid": "0bf227d17e76d1fb16868ff90d75e94c", "text": "The high-efficiency current-mode (CM) and voltage-mode (VM) Class-E power amplifiers (PAs) for MHz wireless power transfer (WPT) systems are first proposed in this paper and the design methodology for them is presented. The CM/VM Class-E PA is able to deliver the increasing/decreasing power with the increasing load and the efficiency maintains high even when the load varies in a wide range. The high efficiency and certain operation mode are realized by introducing an impedance transformation network with fixed components. The efficiency, output power, circuit tolerance, and robustness are all taken into consideration in the design procedure, which makes the CM and the VM Class-E PAs especially practical and efficient to real WPT systems. 6.78-MHz WPT systems with the CM and the VM Class-E PAs are fabricated and compared to that with the classical Class-E PA. The measurement results show that the output power is proportional to the load for the CM Class-E PA and is inversely proportional to the load for the VM Class-E PA. The efficiency for them maintains high, over 83%, when the load of PA varies from 10 to 100  $\\Omega$, while the efficiency of the classical Class-E is about 60% in the worst case. The experiment results validate the feasibility of the proposed design methodology and show that the CM and the VM Class-E PAs present superior performance in WPT systems compared to the traditional Class-E PA.", "title": "" }, { "docid": "3a58c1a2e4428c0b875e1202055e5b13", "text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.", "title": "" }, { "docid": "cc1ae8daa1c1c4ee2b3b4a65ef48b6f5", "text": "The use of entropy as a distance measure has several benefits. Amongst other things it provides a consistent approach to handling of symbolic attributes, real valued attributes and missing values. The approach of taking all possible transformation paths is discussed. 
We describe K*, an instance-based learner which uses such a measure, and results are presented which compare favourably with several machine learning algorithms.", "title": "" }, { "docid": "5d1e77b6b09ebac609f2e518b316bd49", "text": "Principles of muscle coordination in gait have been based largely on analyses of body motion, ground reaction force and EMG measurements. However, data from dynamical simulations provide a cause-effect framework for analyzing these measurements; for example, Part I (Gait Posture, in press) of this two-part review described how force generation in a muscle affects the acceleration and energy flow among the segments. This Part II reviews the mechanical and coordination concepts arising from analyses of simulations of walking. Simple models have elucidated the basic multisegmented ballistic and passive mechanics of walking. Dynamical models driven by net joint moments have provided clues about coordination in healthy and pathological gait. Simulations driven by muscle excitations have highlighted the partial stability afforded by muscles with their viscoelastic-like properties and the predictability of walking performance when minimization of metabolic energy per unit distance is assumed. When combined with neural control models for exciting motoneuronal pools, simulations have shown how the integrative properties of the neuro-musculo-skeletal systems maintain a stable gait. Other analyses of walking simulations have revealed how individual muscles contribute to trunk support and progression. Finally, we discuss how biomechanical models and simulations may enhance our understanding of the mechanics and muscle function of walking in individuals with gait impairments.", "title": "" }, { "docid": "ce282fba1feb109e03bdb230448a4f8a", "text": "The goal of two-sample tests is to assess whether two samples, SP ∼ P and SQ ∼ Q, are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the n examples in SP with a positive label, and by pairing the m examples in SQ with a negative label. If the null hypothesis “P = Q” is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance-level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allow to interpret where P and Q differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). Fourth, we showcase the novel application of GANs together with C2ST for causal discovery.", "title": "" }, { "docid": "ea4f56a1cc4622a102720beb5c2c189d", "text": "Food detection, classification, and analysis have been the topic of indepth studies for a variety of applications related to eating habits and dietary assessment. For the specific topic of calorie measurement of food portions with single and mixed food items, the research community needs a dataset of images for testing and training. 
In this paper we introduce FooDD: a Food Detection Dataset of 3000 images that offer variety of food photos taken from different cameras with different illuminations. We also provide examples of food detection using graph cut segmentation and deep learning algorithms.", "title": "" }, { "docid": "a651ae33adce719033dad26b641e6086", "text": "Knowledge base(KB) plays an important role in artificial intelligence. Much effort has been taken to both manually and automatically construct web-scale knowledge bases. Comparing with manually constructed KBs, automatically constructed KB is broader but with more noises. In this paper, we study the problem of improving the quality for automatically constructed web-scale knowledge bases, in particular, lexical taxonomies of isA relationships. We find that these taxonomies usually contain cycles, which are often introduced by incorrect isA relations. Inspired by this observation, we introduce two kinds of models to detect incorrect isA relations from cycles. The first one eliminates cycles by extracting directed acyclic graphs, and the other one eliminates cycles by grouping nodes into different levels. We implement our models on Probase, a state-of-the-art, automatically constructed, web-scale taxonomy. After processing tens of millions of relations, our models eliminate 74 thousand wrong relations with 91% accuracy.", "title": "" }, { "docid": "5692d2ee410c804e32ebebbcc129c8d6", "text": "Aimed at the industrial sorting technology problems, this paper researched correlative algorithm of image processing and analysis, and completed the construction of robot vision sense. the operational process was described as follows: the camera acquired image sequences of the metal work piece in the sorting region. Image sequence was analyzed to use algorithms of image pre-processing, Hough circle detection, corner detection and contour recognition. in the mean time, this paper also explained the characteristics of three main function model (image pre-processing, corner detection and contour recognition), and proposed algorithm of multi-objective center and a corner recognition. the simulated results show that the sorting system can effectively solve the sorting problem of regular geometric work piece, and accurately calculate center and edge of geometric work piece to achieve the sorting purpose.", "title": "" }, { "docid": "3ea05bc5dd97a1f76e343b42f9553662", "text": "End-to-End Large Scale Machine Learning with KeystoneML", "title": "" }, { "docid": "b480111b47176fe52cd6f9ca296dc666", "text": "We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation. On both fully and partially automatic colorization tasks, we outperform existing methods. We also explore colorization as a vehicle for self-supervised visual representation learning. Fig. 1: Our automatic colorization of grayscale input; more examples in Figs. 3 and 4.", "title": "" } ]
scidocsrr
47803985a7e20308f0ea4ac4bc1901b7
Dex: a semantic-graph differencing tool for studying changes in large code bases
[ { "docid": "b16dfd7a36069ed7df12f088c44922c5", "text": "This paper considers the problem of computing the editing distance between unordered, labeled trees. We give efficient polynomial-time algorithms for the case when one tree is a string or has a bounded number of leaves. By contrast, we show that the problem is NP-complete even for binary trees having a label alphabet of size two. Keywords: Computational Complexity, Unordered trees, NP-completeness.", "title": "" } ]
[ { "docid": "9e5eb4f68046524f7a178828c5ce705f", "text": "Modularity refers to the use of common units to create product variants. As companies strive to rationalize engineering design, manufacturing, and support processes and to produce a large variety of products at a lower cost, modularity is becoming a focus. However, modularity has been treated in the literature in an abstract form and it has not been satisfactorily explored in industry. This paper aims at the development of models and solution approaches to the modularity problem for mechanical, electrical, and mixed process products (e.g., electromechanical products). To interpret various types of modularity, e.g., component-swapping, component-sharing, and bus modularity, a matrix representation of the modularity problem is presented. The decomposition approach is used to determine modules for different products. The representation and solution approaches presented are illustrated with numerous examples. The paper presents a formal approach to modularity allowing for optimal forming of modules even in the situation of insufficient availability of information. The modules determined may be shared across different products.", "title": "" }, { "docid": "ba206d552bb33f853972e3f2e70484bc", "text": "Presumptive stressful life event scale Dear Sir, in different demographic and clinical categories, which has not been attempted. I have read with considerable interest the article entitled, Presumptive stressful life events scale (PSLES)-a new stressful life events scale for use in India by Gurmeet Singh et al (April 1984 issue). I think it is a commendable effort to develop such a scale which would potentially be of use in our setting. However, the research raises several questions, which have not been dealt with in the' paper. The following are the questions or comments which ask for response from the authors: a) The mode of selection of 51 items is not mentioned. If taken arbitrarily they could suggest a bias. If selected from clinical experience, there could be a likelihood of certain events being missed. An ideal way would be to record various events from a number of persons (and patients) and then prepare a list of commonly occuring events. b) It is noteworthy that certain culture specific items as dowry, birth of daughter, etc. are included. Other relevant events as conflict with in-laws (not regarding dowry), refusal by match seeking team (difficulty in finding match for marriage) and lack of son, could be considered stressful in our setting. c) Total number of life events are a function of age, as has been mentioned in the review of literature also, hence age categorisation as under 35 and over 35 might neither be proper nor sufficient. The relationship of number of life events in different age groups would be interesting to note. d) Also, more interesting would be to examine the rank order of life events e) A briefened version would be more welcome. The authors should try to evolve a version of around about 25-30 items, which could be easily applied clinically or for research purposes. As can be seen, from items after serial number 30 (Table 4) many could be excluded. f) The cause and effect relationship is difficult to comment from the results given by the scale. As is known, 'stressfulness' of the event depends on an individuals perception of the event. That persons with higher neu-roticism scores report more events could partly be due to this. g) A minor point, Table 4 mentions Standard Deviations however S. D. 
has not been given for any item. Reply: I am grateful for the interest shown by Dr. Chaturvedi and his …", "title": "" }, { "docid": "9fcdce293fec576f8d287b5692c6f45b", "text": "Enabling search directly over encrypted data is a desirable technique to allow users to effectively utilize encrypted data outsourced to a remote server like cloud service provider. So far, most existing solutions focus on an honest-but-curious server, while security designs against a malicious server have not drawn enough attention. It is not until recently that a few works address the issue of verifiable designs that enable the data owner to verify the integrity of search results. Unfortunately, these verification mechanisms are highly dependent on the specific encrypted search index structures, and fail to support complex queries. There is a lack of a general verification mechanism that can be applied to all search schemes. Moreover, no effective countermeasures (e.g., punishing the cheater) are available when an unfaithful server is detected. In this work, we explore the potential of smart contract in Ethereum, an emerging blockchain-based decentralized technology that provides a new paradigm for trusted and transparent computing. By replacing the central server with a carefully-designed smart contract, we construct a decentralized privacy-preserving search scheme where the data owner can receive correct search results with assurance and without worrying about potential wrongdoings of a malicious server. To better support practical applications, we introduce fairness to our scheme by designing a new smart contract for a financially-fair search construction, in which every participant (especially in the multiuser setting) is treated equally and incentivized to conform to correct computations. In this way, an honest party can always gain what he deserves while a malicious one gets nothing. Finally, we implement a prototype of our construction and deploy it to a locally simulated network and an official Ethereum test network, respectively. The extensive experiments and evaluations demonstrate the practicability of our decentralized search scheme over encrypted data.", "title": "" }, { "docid": "fb6068d738c7865d07999052750ff6a8", "text": "Malware detection and prevention methods are increasingly becoming necessary for computer systems connected to the Internet. The traditional signature based detection of malware fails for metamorphic malware which changes its code structurally while maintaining functionality at time of propagation. This category of malware is called metamorphic malware. In this paper we dynamically analyze the executables produced from various metamorphic generators through an emulator by tracing API calls. A signature is generated for an entire malware class (each class representing a family of viruses generated from one metamorphic generator) instead of for individual malware sample. We show that most of the metamorphic viruses of same family are detected by the same base signature. Once a base signature for a particular metamorphic generator is generated, all the metamorphic viruses created from that tool are easily detected by the proposed method. A Proximity Index between the various Metamorphic generators has been proposed to determine how similar two or more generators are.", "title": "" }, { "docid": "9b1a7f811d396e634e9cc5e34a18404e", "text": "We introduce a novel colorization framework for old black-and-white cartoons which has been originally produced by a cel or paper based technology. 
In this case the dynamic part of the scene is represented by a set of outlined homogeneous regions that superimpose static background. To reduce a large amount of manual intervention we combine unsupervised image segmentation, background reconstruction and structural prediction. Our system in addition allows the user to specify the brightness of applied colors unlike the most of previous approaches which operate only with hue and saturation. We also present a simple but effective color modulation, composition and dust spot removal techniques able produce color images in broadcast quality without additional user intervention.", "title": "" }, { "docid": "b321f3b5e814f809221bc618b99b95bb", "text": "Abstract: Polymer processes often contain state variables whose distributions are multimodal; in addition, the models for these processes are often complex and nonlinear with uncertain parameters. This presents a challenge for Kalman-based state estimators such as the ensemble Kalman filter. We develop an estimator based on a Gaussian mixture model (GMM) coupled with the ensemble Kalman filter (EnKF) specifically for estimation with multimodal state distributions. The expectation maximization algorithm is used for clustering in the Gaussian mixture model. The performance of the GMM-based EnKF is compared to that of the EnKF and the particle filter (PF) through simulations of a polymethyl methacrylate process, and it is seen that it clearly outperforms the other estimators both in state and parameter estimation. While the PF is also able to handle nonlinearity and multimodality, its lack of robustness to model-plant mismatch affects its performance significantly.", "title": "" }, { "docid": "bd73a86a9b67ba26eeeecb2f582fd10a", "text": "Many of UCLES' academic examinations make extensive use of questions that require candidates to write one or two sentences. For example, questions often ask candidates to state, to suggest, to describe, or to explain. These questions are a highly regarded and integral part of the examinations, and are also used extensively by teachers. A system that could partially or wholly automate valid marking of short, free text answers would therefore be valuable, but until † The UCLES Group provides assessment services worldwide through three main business units. • Cambridge-ESOL (English for speakers of other languages) provides examinations in English as a foreign language and qualifications for language teachers throughout the world. • CIE (Cambridge International Examinations) provides international school examinations and international vocational awards. • OCR (Oxford, Cambridge and RSA Examinations) provides general and vocational qualifications to schools, colleges, employers, and training providers in the UK. For more information please visit http://www.ucles.org.uk", "title": "" }, { "docid": "8da6cc5c6a8a5d45fadbab8b7ca8b71f", "text": "Feature detection and description is a pivotal step in many computer vision pipelines. Traditionally, human engineered features have been the main workhorse in this domain. In this paper, we present a novel approach for learning to detect and describe keypoints from images leveraging deep architectures. To allow for a learning based approach, we collect a large-scale dataset of patches with matching multiscale keypoints. The proposed model learns from this vast dataset to identify and describe meaningful keypoints. 
We evaluate our model for the effectiveness of its learned representations for detecting multiscale keypoints and describing their respective support regions.", "title": "" }, { "docid": "0af9b629032ae50a2e94310abcc55aa5", "text": "We introduce novel relaxations for cardinality-constrained learning problems, including least-squares regression as a special but important case. Our approach is based on reformulating a cardinality-constrained problem exactly as a Boolean program, to which standard convex relaxations such as the Lasserre and Sherali-Adams hierarchies can be applied. We analyze the first-order relaxation in detail, deriving necessary and sufficient conditions for exactness in a unified manner. In the special case of least-squares regression, we show that these conditions are satisfied with high probability for random ensembles satisfying suitable incoherence conditions, similar to results on 1-relaxations. In contrast to known methods, our relaxations yield lower bounds on the objective, and it can be verified whether or not the relaxation is exact. If it is not, we show that randomization based on the relaxed solution offers a principled way to generate provably good feasible solutions. This property enables us to obtain high quality estimates even if incoherence conditions are not met, as might be expected in real datasets. We numerically illustrate the performance of the relaxationrandomization strategy in both synthetic and real high-dimensional datasets, revealing substantial improvements relative to 1-based methods and greedy selection heuristics. B Laurent El Ghaoui elghaoui@berkeley.edu Mert Pilanci mert@berkeley.edu Martin J. Wainwright wainwrig@berkeley.edu 1 Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA 2 Department of Electrical Engineering and Computer Sciences and Department of Statistics, University of California, Berkeley, CA, USA", "title": "" }, { "docid": "e8fb4848c8463bfcbe4a09dfeda52584", "text": "A highly efficient rectifier for wireless power transfer in biomedical implant applications is implemented using 0.18-m CMOS technology. The proposed rectifier with active nMOS and pMOS diodes employs a four-input common-gate-type capacitively cross-coupled latched comparator to control the reverse leakage current in order to maximize the power conversion efficiency (PCE) of the rectifier. The designed rectifier achieves a maximum measured PCE of 81.9% at 13.56 MHz under conditions of a low 1.5-Vpp RF input signal with a 1- k output load resistance and occupies 0.009 mm2 of core die area.", "title": "" }, { "docid": "14a8069c29f38129bc8d84b2b3d1ed16", "text": "Document similarity measures are crucial components of many text-analysis tasks, including information retrieval, document classification, and document clustering. Conventional measures are brittle: They estimate the surface overlap between documents based on the words they mention and ignore deeper semantic connections. We propose a new measure that assesses similarity at both the lexical and semantic levels, and learns from human judgments how to combine them by using machine-learning techniques. Experiments show that the new measure produces values for documents that are more consistent with people’s judgments than people are with each other. 
We also use it to classify and cluster large document sets covering different genres and topics, and find that it improves both classification and clustering performance.", "title": "" }, { "docid": "da237e14a3a9f6552fc520812073ee6c", "text": "Shock filters are based in the idea to apply locally either a dilation or an erosion process, depending on whether the pixel belongs to the influence zone of a maximum or a minimum. They create a sharp shock between two influence zones and produce piecewise constant segmentations. In this paper we design specific shock filters for the enhancement of coherent flow-like structures. They are based on the idea to combine shock filtering with the robust orientation estimation by means of the structure tensor. Experiments with greyscale and colour images show that these novel filters may outperform previous shock filters as well as coherence-enhancing diffusion filters.", "title": "" }, { "docid": "e93f468ac0da8e64037ca47aff55deb2", "text": "Urban areas are the primary habitat for a majority of the global population. The development of cities not only entails a fundamental change in human settlement patterns but also a dramatic transformation of the physical environment. Thus, urban areas and their development are at the centre of all discussions on sustainability and/or sustainable development. This review essay introduces the notion of Urban Metabolism (UM), a term that provides a conceptual framework to study how a city functions, and hence, a way to address the sustainability issue of a city. Due to the significance and scope of the subject, the notion of UM is interpreted and thus approached differently across diverse disciplines from both the natural and social science fields. In order to comprehend the commonalities and controversies between them, the present review also briefly introduces the historical roots of the term. This review reveals the increasing significance of a rich and rapidly evolving field of research on the metabolism of urban areas.", "title": "" }, { "docid": "bc2e5599a911dd84e303ac5ebd029f1a", "text": "A simultaneous X/Ka feed system has been designed to cater for reflector antennas with a F/D ratio of 0.8. This work is an extension of the successful design of the initial X/Ka feed system that was designed for reflectors with a F/D ratio of 0.65. Although simple in concept, this move from F/D=0.65 to F/D=0.8 is not an easy task from a design point of view.", "title": "" }, { "docid": "bd18e4473cba642c5bea1bddc418f6c2", "text": "This paper presents Smart Home concepts for Internet of Things (IoT) technologies that will make life at home more convenient. In this paper, we first describe the overall design of a low-cost Smart Refrigerator built with Raspberry Pi. Next, we explain two sensors controlling each camera, which are hooked up to our Rasberry Pi board. We further show how the user can use the Graphical User Interface (GUI) to interact with our system. With this Smart Home and Internet of Things technology, a user-friendly graphical user interface, prompt data synchronization among multiple devices, and real-time actual images captured from the refrigerator, our system can easily assist a family to reduce food waste.", "title": "" }, { "docid": "f50f7daeac03fbd41f91ff48c054955b", "text": "Neuronal signalling and communication underpin virtually all aspects of brain activity and function. 
Network science approaches to modelling and analysing the dynamics of communication on networks have proved useful for simulating functional brain connectivity and predicting emergent network states. This Review surveys important aspects of communication dynamics in brain networks. We begin by sketching a conceptual framework that views communication dynamics as a necessary link between the empirical domains of structural and functional connectivity. We then consider how different local and global topological attributes of structural networks support potential patterns of network communication, and how the interactions between network topology and dynamic models can provide additional insights and constraints. We end by proposing that communication dynamics may act as potential generative models of effective connectivity and can offer insight into the mechanisms by which brain networks transform and process information.", "title": "" }, { "docid": "18c517f26bceeb7930a4418f7a6b2f30", "text": "BACKGROUND\nWe aimed to study whether pulmonary hypertension (PH) and elevated pulmonary vascular resistance (PVR) could be predicted by conventional echo Doppler and novel tissue Doppler imaging (TDI) in a population of chronic obstructive pulmonary disease (COPD) free of LV disease and co-morbidities.\n\n\nMETHODS\nEchocardiography and right heart catheterization was performed in 100 outpatients with COPD. By echocardiography the time-integral of the TDI index, right ventricular systolic velocity (RVSmVTI) and pulmonary acceleration-time (PAAcT) were measured and adjusted for heart rate. The COPD patients were randomly divided in a derivation (n = 50) and a validation cohort (n = 50).\n\n\nRESULTS\nPH (mean pulmonary artery pressure (mPAP) ≥ 25mmHg) and elevated PVR ≥ 2Wood unit (WU) were predicted by satisfactory area under the curve for RVSmVTI of 0.93 and 0.93 and for PAAcT of 0.96 and 0.96, respectively. Both echo indices were 100% feasible, contrasting 84% feasibility for parameters relying on contrast enhanced tricuspid-regurgitation. RVSmVTI and PAAcT showed best correlations to invasive measured mPAP, but less so to PVR. PAAcT was accurate in 90- and 78% and RVSmVTI in 90- and 84% in the calculation of mPAP and PVR, respectively.\n\n\nCONCLUSIONS\nHeart rate adjusted-PAAcT and RVSmVTI are simple and reproducible methods that correlate well with pulmonary artery pressure and PVR and showed high accuracy in detecting PH and increased PVR in patients with COPD. Taken into account the high feasibility of these two echo indices, they should be considered in the echocardiographic assessment of COPD patients.", "title": "" }, { "docid": "fde9d6a4fc1594a1767e84c62c7d3b89", "text": "This paper explores the effects of emotions embedded in a seller review on its perceived helpfulness to readers. Drawing on frameworks in literature on emotion and cognitive processing, we propose that over and above a well-known negativity bias, the impact of discrete emotions in a review will vary, and that one source of this variance is reader perceptions of reviewers’ cognitive effort. We focus on the roles of two distinct, negative emotions common to seller reviews: anxiety and anger. In the first two studies, experimental methods were utilized to identify and explain the differential impact of anxiety and anger in terms of perceived reviewer effort. In the third study, seller reviews from Yahoo! Shopping web sites were collected to examine the relationship between emotional review content and helpfulness ratings. 
Our findings demonstrate the importance of examining discrete emotions in online word-of-mouth, and they carry important practical implications for consumers and online retailers.", "title": "" }, { "docid": "fd11fbed7a129e3853e73040cbabb56c", "text": "A digitally modulated power amplifier (DPA) in 1.2 V 0.13 mum SOI CMOS is presented, to be used as a building block in multi-standard, multi-band polar transmitters. It performs direct amplitude modulation of an input RF carrier by digitally controlling an array of 127 unary-weighted and three binary-weighted elementary gain cells. The DPA is based on a novel two-stage topology, which allows seamless operation from 800 MHz through 2 GHz, with a full-power efficiency larger than 40% and a 25.2 dBm maximum envelope power. Adaptive digital predistortion is exploited for DPA linearization. The circuit is thus able to reconstruct 21.7 dBm WCDMA/EDGE signals at 1.9 GHz with 38% efficiency and a higher than 10 dB margin on all spectral specifications. As a result of the digital modulation technique, a higher than 20.1 % efficiency is guaranteed for WCDMA signals with a peak-to-average power ratio as high as 10.8 dB. Furthermore, a 15.3 dBm, 5 MHz WiMAX OFDM signal is successfully reconstructed with a 22% efficiency and 1.53% rms EVM. A high 10-bit nominal resolution enables a wide-range TX power control strategy to be implemented, which greatly minimizes the quiescent consumption down to 10 mW. A 16.4% CDMA average efficiency is thus obtained across a > 70 dB power control range, while complying with all the spectral specifications.", "title": "" }, { "docid": "c44f971f063f8594985a98beb897464a", "text": "In recent years, multi-agent epistemic planning has received attention from both dynamic logic and planning communities. Existing implementations of multi-agent epistemic planning are based on compilation into classical planning and suffer from various limitations, such as generating only linear plans, restriction to public actions, and incapability to handle disjunctive beliefs. In this paper, we propose a general representation language for multi-agent epistemic planning where the initial KB and the goal, the preconditions and effects of actions can be arbitrary multi-agent epistemic formulas, and the solution is an action tree branching on sensing results. To support efficient reasoning in the multi-agent KD45 logic, we make use of a normal form called alternating cover disjunctive formulas (ACDFs). We propose basic revision and update algorithms for ACDFs. We also handle static propositional common knowledge, which we call constraints. Based on our reasoning, revision and update algorithms, adapting the PrAO algorithm for contingent planning from the literature, we implemented a multi-agent epistemic planner called MEPK. Our experimental results show the viability of our approach.", "title": "" } ]
scidocsrr
2c748573b4053bd311ae79c13e71a287
Shiny-phyloseq: Web application for interactive microbiome analysis with provenance tracking
[ { "docid": "06ab903f3de4c498e1977d7d0257f8f3", "text": "BACKGROUND\nThe analysis of microbial communities through DNA sequencing brings many challenges: the integration of different types of data with methods from ecology, genetics, phylogenetics, multivariate statistics, visualization and testing. With the increased breadth of experimental designs now being pursued, project-specific statistical analyses are often needed, and these analyses are often difficult (or impossible) for peer researchers to independently reproduce. The vast majority of the requisite tools for performing these analyses reproducibly are already implemented in R and its extensions (packages), but with limited support for high throughput microbiome census data.\n\n\nRESULTS\nHere we describe a software project, phyloseq, dedicated to the object-oriented representation and analysis of microbiome census data in R. It supports importing data from a variety of common formats, as well as many analysis techniques. These include calibration, filtering, subsetting, agglomeration, multi-table comparisons, diversity analysis, parallelized Fast UniFrac, ordination methods, and production of publication-quality graphics; all in a manner that is easy to document, share, and modify. We show how to apply functions from other R packages to phyloseq-represented data, illustrating the availability of a large number of open source analysis techniques. We discuss the use of phyloseq with tools for reproducible research, a practice common in other fields but still rare in the analysis of highly parallel microbiome census data. We have made available all of the materials necessary to completely reproduce the analysis and figures included in this article, an example of best practices for reproducible research.\n\n\nCONCLUSIONS\nThe phyloseq project for R is a new open-source software package, freely available on the web from both GitHub and Bioconductor.", "title": "" } ]
[ { "docid": "f322c2d3ab7db46feeceec2a6336cf6b", "text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. The existing spatially regularized discriminative correlation filter (SRDCF) method learns partial-target information or background information when experiencing rotation, out of view, and heavy occlusion. In order to reduce the computational complexity by creating a novel method to enhance tracking ability, we first introduce an adaptive dimensionality reduction technique to extract the features from the image, based on pre-trained VGG-Net. We then propose an adaptive model update to assign weights during an update procedure depending on the peak-to-sidelobe ratio. Finally, we combine the online SRDCF-based tracker with the offline Siamese tracker to accomplish long term tracking. Experimental results demonstrate that the proposed tracker has satisfactory performance in a wide range of challenging tracking scenarios.", "title": "" }, { "docid": "de4c44363fd6bb6da7ec0c9efd752213", "text": "Modeling the structure of coherent texts is a task of great importance in NLP. The task of organizing a given set of sentences into a coherent order has been commonly used to build and evaluate models that understand such structure. In this work we propose an end-to-end neural approach based on the recently proposed set to sequence mapping framework to address the sentence ordering problem. Our model achieves state-of-the-art performance in the order discrimination task on two datasets widely used in the literature. We also consider a new interesting task of ordering abstracts from conference papers and research proposals and demonstrate strong performance against recent methods. Visualizing the sentence representations learned by the model shows that the model has captured high level logical structure in these paragraphs. The model also learns rich semantic sentence representations by learning to order texts, performing comparably to recent unsupervised representation learning methods in the sentence similarity and paraphrase detection tasks.", "title": "" }, { "docid": "d4a893a151ce4a3dee0e5fde0ba11b7b", "text": "Software-Defined Radio (SDR) technology has already cleared up passive radar applications. Nevertheless, until now, no work has pointed how this flexible radio could fully and directly exploit pulsed radar signals. This paper aims at introducing this field of study presenting not only an SDR-based radar-detector but also how it could be conceived on a low power consumption device as a tablet, which would make convenient a passive network to identify and localize aircraft as a redundancy to the conventional air traffic control in adverse situations. After a brief approach of the main features of the equipment, as well as of the developed processing script, indoor experiments took place. Their results demonstrate that the processing of pulsed radar signal allows emitters to be identified when a local database is confronted. All this commitment has contributed to a greater proposal of an Electronic Intelligence (ELINT) or Electronic Support Measures (ESM) system embedded on a tablet, presenting characteristics of portability and furtiveness. 
This study is suggested for the areas of Software-Defined Radio, Electronic Warfare, Electromagnetic Devices and Radar Signal Processing.", "title": "" }, { "docid": "c1349662f18e4744920c7f7db93360e7", "text": "We present an approach to learning features that represent the local geometry around a point in an unstructured point cloud. Such features play a central role in geometric registration, which supports diverse applications in robotics and 3D vision. Current state-of-the-art local features for unstructured point clouds have been manually crafted and none combines the desirable properties of precision, compactness, and robustness. We show that features with these properties can be learned from data, by optimizing deep networks that map high-dimensional histograms into low-dimensional Euclidean spaces. The presented approach yields a family of features, parameterized by dimension, that are both more compact and more accurate than existing descriptors.", "title": "" }, { "docid": "50e58087f9a02a4a2d828b9434bdea17", "text": "ÐThis paper concerns an efficient algorithm for the solution of the exterior orientation problem. Orthogonal decompositions are used to first isolate the unknown depths of feature points in the camera reference frame, allowing the problem to be reduced to an absolute orientation with scale problem, which is solved using the SVD. The key feature of this approach is the low computational cost compared to existing approaches. Index TermsÐExterior orientation, pose estimation, absolute orientation, efficient linear method.", "title": "" }, { "docid": "95a380b670afe52b86aa905d7b6e5452", "text": "Objectives To determine the effectiveness of replacing restorations considered to be the cause of an oral lichenoid lesion (oral lichenoid reaction)(OLL).Design Clinical intervention and nine-month follow up.Setting The study was carried out in the University Dental Hospital of Manchester, 1998-2002.Subjects and methods A total of 51 patients, mean age 53 (SD 13) years, who had oral lesions or symptoms suspected to be related to their dental restorations were investigated. Baseline patch tests for a series of dental materials, biopsies and photographs were undertaken. Thirty-nine out of 51 (76%) of patients had their restorations replaced.Results The clinical manifestations of OLL were variable; the majority of OLL were found to be in the molar and retro molar area of the buccal mucosa and the tongue. Twenty-seven (53%) patients had positive patch test reactions to at least one material, 24 of them for one or more mercury compound. After a mean follow up period of nine months, lesions adjacent to replaced restorations completely healed in 16 (42%) patients (10 positive and 6 negative patch tests). Improvement in signs and symptoms were found in 18 (47%) patients (11 positive and 7 negative patch tests).Conclusion OLLs may be elicited by some dental restorations. Replacing restorations adjacent to these lesions is associated with healing in the majority of cases particularly when lesions are in close contact with restorations. A patch test seems to be of limited benefit as a predictor of such reactions.", "title": "" }, { "docid": "0d2a8165acbd9413a0d1e7da9a825c93", "text": "Psychological studies of categorization often assume that all concepts are of the same general kind, and are operated on by the same kind of categorization process. In this paper, we argue against this unitary view, and for the existence of qualitatively different categorization processes. 
In particular, we focus on the distinction between categorizing an item by: (a) applying a category-defining rule to the item vs. (b) determining the similarity of that item to remembered exemplars of a category. We begin by characterizing rule application and similarity computations as strategies of categorization. Next, we review experimental studies that have used artificial categories and shown that differences in instructions or time pressure can lead to either rule-based categorization or similarity-based categorization. Then we consider studies that have used natural concepts and again demonstrated that categorization can be done by either rule application or similarity calculations. Lastly, we take up evidence from cognitive neuroscience relevant to the rule vs. similarity issue. There is some indirect evidence from brain-damaged patients for neurological differences between categorization based on rules vs. that based on similarity (with the former involving frontal regions, and the latter relying more on posterior areas). For more direct evidence, we present the results of a recent neuroimaging experiment, which indicates that different neural circuits are involved when people categorize items on the basis of a rule as compared with when they categorize the same items on the basis of similarity.", "title": "" }, { "docid": "16e1174454d62c69d831effce532bcad", "text": "We report on the quantitative determination of acetaminophen (paracetamol; NAPAP-d(0)) in human plasma and urine by GC-MS and GC-MS/MS in the electron-capture negative-ion chemical ionization (ECNICI) mode after derivatization with pentafluorobenzyl (PFB) bromide (PFB-Br). Commercially available tetradeuterated acetaminophen (NAPAP-d(4)) was used as the internal standard. NAPAP-d(0) and NAPAP-d(4) were extracted from 100-μL aliquots of plasma and urine with 300 μL ethyl acetate (EA) by vortexing (60s). After centrifugation the EA phase was collected, the solvent was removed under a stream of nitrogen gas, and the residue was reconstituted in acetonitrile (MeCN, 100 μL). PFB-Br (10 μL, 30 vol% in MeCN) and N,N-diisopropylethylamine (10 μL) were added and the mixture was incubated for 60 min at 30 °C. Then, solvents and reagents were removed under nitrogen and the residue was taken up with 1000 μL of toluene, from which 1-μL aliquots were injected in the splitless mode. GC-MS quantification was performed by selected-ion monitoring ions due to [M-PFB](-) and [M-PFB-H](-), m/z 150 and m/z 149 for NAPAP-d(0) and m/z 154 and m/z 153 for NAPAP-d(4), respectively. GC-MS/MS quantification was performed by selected-reaction monitoring the transition m/z 150 → m/z 107 and m/z 149 → m/z 134 for NAPAP-d(0) and m/z 154 → m/z 111 and m/z 153 → m/z 138 for NAPAP-d(4). The method was validated for human plasma (range, 0-130 μM NAPAP-d(0)) and urine (range, 0-1300 μM NAPAP-d(0)). Accuracy (recovery, %) ranged between 89 and 119%, and imprecision (RSD, %) was below 19% in these matrices and ranges. A close correlation (r>0.999) was found between the concentrations measured by GC-MS and GC-MS/MS. By this method, acetaminophen can be reliably quantified in small plasma and urine sample volumes (e.g., 10 μL). 
The analytical performance of the method makes it especially useful in pediatrics.", "title": "" }, { "docid": "802af4a1179602c086c4bbf73208ce16", "text": "BACKGROUND\nWe undertook a feasibility study to evaluate feasibility and utility of short message services (SMSs) to support Iraqi adults with newly diagnosed type 2 diabetes.\n\n\nSUBJECTS AND METHODS\nFifty patients from a teaching hospital clinic in Basrah in the first year after diagnosis were recruited to receive weekly SMSs relating to diabetes self-management over 29 weeks. Numbers of messages received, acceptability, cost, effect on glycated hemoglobin (HbA1c), and diabetes knowledge were documented.\n\n\nRESULTS\nForty-two patients completed the study, receiving an average 22 of 28 messages. Mean knowledge score rose from 8.6 (SD 1.5) at baseline to 9.9 (SD 1.4) 6 months after receipt of SMSs (P=0.002). Baseline and 6-month knowledge scores correlated (r=0.297, P=0.049). Mean baseline HbA1c was 79 mmol/mol (SD 14 mmol/mol) (9.3% [SD 1.3%]) and decreased to 70 mmol/mol (SD 13 mmol/mol) (8.6% [SD 1.2%]) (P=0.001) 6 months after the SMS intervention. Baseline and 6-month values were correlated (r=0.898, P=0.001). Age, gender, and educational level showed no association with changes in HbA1c or knowledge score. Changes in knowledge score were correlated with postintervention HbA1c (r=-0.341, P=0.027). All patients were satisfied with text messages and wished the service to be continued after the study. The cost of SMSs was €0.065 per message.\n\n\nCONCLUSIONS\nThis study demonstrates SMSs are acceptable, cost-effective, and feasible in supporting diabetes care in the challenging, resource-poor environment of modern-day Iraq. This study is the first in Iraq to demonstrate similar benefits of this technology on diabetes education and management to those seen from its use in better-resourced parts of the world. A randomized controlled trial is needed to assess precise benefits on self-care and knowledge.", "title": "" }, { "docid": "c3e63d82514b9e9b1cc172ea34f7a53e", "text": "Deep Learning is one of the next big things in Recommendation Systems technology. The past few years have seen the tremendous success of deep neural networks in a number of complex machine learning tasks such as computer vision, natural language processing and speech recognition. After its relatively slow uptake by the recommender systems community, deep learning for recommender systems became widely popular in 2016.\n We believe that a tutorial on the topic of deep learning will do its share to further popularize the topic. Notable recent application areas are music recommendation, news recommendation, and session-based recommendation. The aim of the tutorial is to encourage the application of Deep Learning techniques in Recommender Systems, to further promote research in deep learning methods for Recommender Systems.", "title": "" }, { "docid": "98c72706e0da844c80090c1ed5f3abeb", "text": "Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can “interpolate”: By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. 
In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations.", "title": "" }, { "docid": "e678405fd86a3d8a52ecf779ea11758b", "text": "The high carrier mobility of graphene has been exploited in field-effect transistors that operate at high frequencies. Transistors were fabricated on epitaxial graphene synthesized on the silicon face of a silicon carbide wafer, achieving a cutoff frequency of 100 gigahertz for a gate length of 240 nanometers. The high-frequency performance of these epitaxial graphene transistors exceeds that of state-of-the-art silicon transistors of the same gate length.", "title": "" }, { "docid": "58390e457d03dfec19b0ae122a7c0e0b", "text": "A single-fed CP stacked patch antenna is proposed to cover all the GPS bands, including E5a/E5b for the Galileo system. The small aperture size (lambda/8 at the L5 band) and the single feeding property make this antenna a promising element for small GPS arrays. The design procedures and antenna performances are presented, and issues related to coupling between array elements are discussed.", "title": "" }, { "docid": "1f5758d1c9b470c9fa1c30e72f1257b7", "text": "This study aimed to describe the social and cultural etiology of violence against women in Jordan. A sample of houses was randomly selected from all 12 Governorates in Jordan, resulting in a final sample of 1,854 randomly selected women. ANOVA analysis showed significant differences in violence against women as a result of women’s education, F = 4.045, α = 0.003, women who work, F = 3.821, α = 0.001, espouser to violence F = 17.896, α = 0.000, experiencing violence during childhood F = 12.124, α = 0.000, and wife’s propensity to leave the marital relationship F = 12.124, α = 0.000. However, no differences were found in violence against women because of the husband’s education, husband’s work, or having friends who belief in physical punishment of kids. Findings showed women experienced 45 % or witnessed 55 % violence during their childhood. Almost all 98 % of the sample was subjected to at least one type of violence. Twenty-eight percent of the sample believed a husband has the right to control a woman’s behavior and 93 % believed a wife is obliged to obey a husband. After each abusive incidence, women felt insecure, ashamed, frightened, captive and stigmatized.", "title": "" }, { "docid": "72b7e2f1c960d0c5da639fca74aa188a", "text": "Some previous studies (e.g. that carried out by Van Bruggen et al. in 2004) have pointed to a need for additional research in order to firmly establish the usefulness of LSA (latent semantic analysis) parameters for automatic evaluation of academic essays. The extreme variability in approaches to this technique makes it difficult to identify the most efficient parameters and the optimum combination. With this goal in mind, we conducted a high spectrum study to investigate the efficiency of some of the major LSA parameters in small-scale corpora. 
We used two specific domain corpora that differed in the structure of the text (one containing only technical terms and the other with more tangential information). Using these corpora we tested different semantic spaces, formed by applying different parameters and different methods of comparing the texts. Parameters varied included weighting functions (Log-IDF or Log-Entropy), dimensionality reduction (truncating the matrices after SVD to a set percentage of dimensions), methods of forming pseudo-documents (vector sum and folding-in) and measures of similarity (cosine or Euclidean distances). We also included two groups of essays to be graded, one written by experts and other by non-experts. Both groups were evaluated by three human graders and also by LSA. We extracted the correlations of each LSA condition with human graders, and conducted an ANOVA to analyse which parameter combination correlates best. Results suggest that distances are more efficient in academic essay evaluation than cosines. We found no clear evidence that the classical LSA protocol works systematically better than some simpler version (the classical protocol achieves the best performance only for some combinations of parameters in a few cases), and found that the benefits of reducing dimensionality arise only when the essays are introduced into semantic spaces using the folding-in method. *Address correspondence to: José Antonio León, Dpto. de Psicologı́a Básica, Facultad de Psicologı́a, Universidad Autónoma de Madrid, Campus de Cantoblanco, 28049 Madrid, Spain. Tel.: 0034 914975226. Fax: 0034 914975215. E-mail: joseantonio.leon@uam.es Journal of Quantitative Linguistics 2010, Volume 17, Number 1, pp. 1–29 DOI: 10.1080/09296170903395890 0929-6174/10/17010001 2010 Taylor & Francis", "title": "" }, { "docid": "39180c1e2636a12a9d46d94fe3ebfa65", "text": "We present a novel machine learning based algorithm extending the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation, and varying lighting conditions. We demonstrate that our algorithm runs in real-time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens, and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train), impact of memory footprint and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations and conclude with directions for future work.", "title": "" }, { "docid": "ec2a377d643326c5e7f64f6f01f80a04", "text": "October 2006 | Volume 3 | Issue 10 | e294 Cultural competency has become a fashionable term for clinicians and researchers. Yet no one can defi ne this term precisely enough to operationalize it in clinical training and best practices. It is clear that culture does matter in the clinic. 
Cultural factors are crucial to diagnosis, treatment, and care. They shape health-related beliefs, behaviors, and values [1,2]. But the large claims about the value of cultural competence for the art of professional care-giving around the world are simply not supported by robust evaluation research showing that systematic attention to culture really improves clinical services. This lack of evidence is a failure of outcome research to take culture seriously enough to routinely assess the cost-effectiveness of culturally informed therapeutic practices, not a lack of effort to introduce culturally informed strategies into clinical settings [3].", "title": "" }, { "docid": "fda80f2f0eb57a101dde880b48a80ba4", "text": "In this paper, we analyze and compare existing human grasp taxonomies and synthesize them into a single new taxonomy (dubbed “The GRASP Taxonomy” after the GRASP project funded by the European Commission). We consider only static and stable grasps performed by one hand. The goal is to extract the largest set of different grasps that were referenced in the literature and arrange them in a systematic way. The taxonomy provides a common terminology to define human hand configurations and is important in many domains such as human-computer interaction and tangible user interfaces where an understanding of the human is basis for a proper interface. Overall, 33 different grasp types are found and arranged into the GRASP taxonomy. Within the taxonomy, grasps are arranged according to 1) opposition type, 2) the virtual finger assignments, 3) type in terms of power, precision, or intermediate grasp, and 4) the position of the thumb. The resulting taxonomy incorporates all grasps found in the reviewed taxonomies that complied with the grasp definition. We also show that due to the nature of the classification, the 33 grasp types might be reduced to a set of 17 more general grasps if only the hand configuration is considered without the object shape/size.", "title": "" }, { "docid": "854bd77e534e0bb53953edb708c867b1", "text": "About 60-GHz millimeter wave (mmWave) unlicensed frequency band is considered as a key enabler for future multi-Gbps WLANs. IEEE 802.11ad (WiGig) standard has been ratified for 60-GHz wireless local area networks (WLANs) by only considering the use case of peer to peer (P2P) communication coordinated by a single WiGig access point (AP). However, due to 60-GHz fragile channel, multiple number of WiGig APs should be installed to fully cover a typical target environment. Nevertheless, the exhaustive search beamforming training and the maximum received power-based autonomous users association prevent WiGig APs from establishing optimal WiGig concurrent links using random access. In this paper, we formulate the problem of WiGig concurrent transmissions in random access scenarios as an optimization problem, and then we propose a greedy scheme based on (2.4/5 GHz) Wi-Fi/(60 GHz) WiGig coordination to find out a suboptimal solution for it. In the proposed WLAN, the wide coverage Wi-Fi band is used to provide the control signalling required for launching the high date rate WiGig concurrent links. Besides, statistical learning using Wi-Fi fingerprinting is utilized to estimate the suboptimal candidate AP along with its suboptimal beam direction for establishing the WiGig concurrent link without causing interference to the existing WiGig data links while maximizing the total system throughput. 
Numerical analysis confirms the high impact of the proposed Wi-Fi/WiGig coordinated WLAN.", "title": "" } ]
scidocsrr
ac4d2a8d5dc71c2e4efd8ff6c53750be
Melody Extraction From Polyphonic Music Signals Using Pitch Contour Characteristics
[ { "docid": "e8933b0afcd695e492d5ddd9f87aeb81", "text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.", "title": "" }, { "docid": "b1422b2646f02a5a84a6a4b13f5ae7d8", "text": "Two experiments examined the influence of timbre on auditory stream segregation. In experiment 1, listeners heard sequences of orchestral tones equated for pitch and loudness, and they rated how strongly the instruments segregated. Multidimensional scaling analyses of these ratings revealed that segregation was based on the static and dynamic acoustic attributes that influenced similarity judgements in a previous experiment (P Iverson & CL Krumhansl, 1993). In Experiment 2, listeners heard interleaved melodies and tried to recognize the melodies played by a target timbre. The results extended the findings of Experiment 1 to tones varying pitch. Auditory stream segregation appears to be influenced by gross differences in static spectra and by dynamic attributes, including attack duration and spectral flux. These findings support a gestalt explanation of stream segregation and provide evidence against peripheral channel model.", "title": "" } ]
[ { "docid": "bf524461eb7eec362103452ed7c7f552", "text": "Over many years Jacques Mehler has provided us all with a wealth of surprising and complex results on both nature and nurture in language acquisition. He has shown that there are powerful and enduring effects of early (and even prenatal) experience on infant language perception, and also considerable prior knowledge that infants bring to the language acquisition task. He has shown strong age effects on second-language acquisition and its neural organization, and has also shown that profi­ ciency predicts cerebral organization for the second language. In his honor, we focus here on one of the problems he has addressed-the no­ tion of a critical period for language acquisition-and attempt to sort out the current state of the evidence. In recent years there has been much discussion about whether there is a critical, or sensitive, period for language acquisition. Two issues are implicit in this discussion: First, what would constitute evidence for a critical period, particularly in humans, where the time scale for develop­ ment is greater than that in the well-studied nonhuman cases, and where proficient behavioral outcomes might be achieved by more than one route? Second, what is the import of establishing, or failing to establish, such a critical period? What does this mean for our understanding of the computational and neural mechanisms underLying language acquisition? In this chapter we address these issues explicitly, by briefly reviewing the available evidence on a critical period for human language acquisi­ tion, and then by asking whether the evidence meets the expected criteria for critical or sensitive periods seen in other well-studied domains in hu­ man and nonhuman development. We conclude by stating what we think 482 Neuport, Bave/ier & Neville the outcome of this issue means (and does not mean) for our understand­ ing of language acquisition. What Is a Critical or Sensitive Period? Before beginning, we should state briefly what we (and others) mean by a critical or sensitive period. A critical or sensitive period for learning is shown when there is a relationship between the age (more technically, the developmental state of the organism) at which some crucial experience is presented to the organism and the amount of learning which results. ill most domains with critical or sensitive periods, the privileged time for learning occurs during early development, but this is not necessarily the case (ef. bonding in sheep, which occurs immediately surrounding partu­ rition). The important feature is …", "title": "" }, { "docid": "612271aa8848349735422395a91ffe7b", "text": "The contamination of groundwater by heavy metal, originating either from natural soil sources or from anthropogenic sources is a matter of utmost concern to the public health. Remediation of contaminated groundwater is of highest priority since billions of people all over the world use it for drinking purpose. In this paper, thirty five approaches for groundwater treatment have been reviewed and classified under three large categories viz chemical, biochemical/biological/biosorption and physico-chemical treatment processes. Comparison tables have been provided at the end of each process for a better understanding of each category. 
Selection of a suitable technology for contamination remediation at a particular site is one of the most challenging job due to extremely complex soil chemistry and aquifer characteristics and no thumb-rule can be suggested regarding this issue. In the past decade, iron based technologies, microbial remediation, biological sulphate reduction and various adsorbents played versatile and efficient remediation roles. Keeping the sustainability issues and environmental ethics in mind, the technologies encompassing natural chemistry, bioremediation and biosorption are recommended to be adopted in appropriate cases. In many places, two or more techniques can work synergistically for better results. Processes such as chelate extraction and chemical soil washings are advisable only for recovery of valuable metals in highly contaminated industrial sites depending on economical feasibility.", "title": "" }, { "docid": "c7808ecbca4c5bf8e8093dce4d8f1ea7", "text": "41  Abstract— This project deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80–100-mm pipelines in an indoor pipeline environment. The robot system consists of a Robot body, a control system, a CMOS camera, an accelerometer, a temperature sensor, a ZigBee module. The robot module will be designed with the help of CAD tool. The control system consists of Atmega16 micro controller and Atmel studio IDE. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to have grip of the pipe walls. Unique features of this robot are the caterpillar wheel, the four-bar mechanism supports the well grip of wall, a simple and easy user interface.", "title": "" }, { "docid": "00669cc35f09b699e08fa7c8cc3701c8", "text": "Want to get experience? Want to get any ideas to create new things in your life? Read interpolation of spatial data some theory for kriging now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.", "title": "" }, { "docid": "61def8d760de928a8cae89f2699c51cf", "text": "OBJECTIVES\nTo describe the development and validation of a cancer awareness questionnaire (CAQ) based on a literature review of previous studies, focusing on cancer awareness and prevention.\n\n\nMATERIALS AND METHODS\nA total of 388 Chinese undergraduate students in a private university in Kuala Lumpur, Malaysia, were recruited to evaluate the developed self-administered questionnaire. The CAQ consisted of four sections: awareness of cancer warning signs and screening tests; knowledge of cancer risk factors; barriers in seeking medical advice; and attitudes towards cancer and cancer prevention. The questionnaire was evaluated for construct validity using principal component analysis and internal consistency using Cronbach's alpha (α) coefficient. Test-retest reliability was assessed with a 10-14 days interval and measured using Pearson product-moment correlation.\n\n\nRESULTS\nThe initial 77-item CAQ was reduced to 63 items, with satisfactory construct validity, and a high total internal consistency (Cronbach's α=0.77). 
A total of 143 students completed the questionnaire for the test-retest reliability obtaining a correlation of 0.72 (p<0.001) overall.\n\n\nCONCLUSIONS\nThe CAQ could provide a reliable and valid measure that can be used to assess cancer awareness among local Chinese undergraduate students. However, further studies among students from different backgrounds (e.g. ethnicity) are required in order to facilitate the use of the cancer awareness questionnaire among all university students.", "title": "" }, { "docid": "9d580c5b482a039b773d58714ee18ebb", "text": "We develop a recurrent reinforcement learning (RRL) system that directly induces portfolio management policies from time series of asset pri ces and indicators, while accounting for transaction costs. The RRL approach le arns a direct mapping from indicator series to portfolio weights, bypassing the need to explicitly model the time series of price returns. The resulting polici es dynamically optimize the portfolio Sharpe ratio, while incorporating changing c onditions and transaction costs. A key problem with many portfolio optimization m ethods, including Markowitz, is discovering ”corner solutions” with weight c oncentrated on just a few assets. In a dynamic context, naive portfolio algorithm s can exhibit switching behavior, particularly when transaction costs are ignored . In this work, we extend the RRL approach to produce better diversified portfoli os and smoother asset allocations over time. The solutions we propose are to inclu de realistic transaction costs and to shrink portfolio weights toward the prior p tfolio. The methods are assessed on a global asset allocation problem consistin g of the Pacific, North America and Europe MSCI International Equity Indices.", "title": "" }, { "docid": "09af9b0987537e54b7456fb36407ffe3", "text": "The introduction of high-speed backplane transceivers inside FPGAs has addressed critical issues such as the ease in scalability of performance, high availability, flexible architectures, the use of standards, and rapid time to market. These have been crucial to address the ever-increasing demand for bandwidth in communication and storage systems [1-3], requiring novel techniques in receiver (RX) and clocking circuits.", "title": "" }, { "docid": "5cc3ce9628b871d57f086268ae1510e0", "text": "Unprecedented high volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit planning and operation of the future power system, and to help the customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using Deep Reinforcement Learning, a hybrid type of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both of them being extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This highly-dimensional database includes information about photovoltaic power generation, electric vehicles as well as buildings appliances. 
Moreover, these on-line energy scheduling strategies could be used to provide realtime feedback to consumers to encourage more efficient use of electricity.", "title": "" }, { "docid": "6c6afdefc918e6dfdb6bc5f5bb96cf45", "text": "Due to the complexity and uncertainty of socioeconomic environments and cognitive diversity of group members, the cognitive information over alternatives provided by a decision organization consisting of several experts is usually uncertain and hesitant. Hesitant fuzzy preference relations provide a useful means to represent the hesitant cognitions of the decision organization over alternatives, which describe the possible degrees that one alternative is preferred to another by using a set of discrete values. However, in order to depict the cognitions over alternatives more comprehensively, besides the degrees that one alternative is preferred to another, the decision organization would give the degrees that the alternative is non-preferred to another, which may be a set of possible values. To effectively handle such common cases, in this paper, the dual hesitant fuzzy preference relation (DHFPR) is introduced and the methods for group decision making (GDM) with DHFPRs are investigated. Firstly, a new operator to aggregate dual hesitant fuzzy cognitive information is developed, which treats the membership and non-membership information fairly, and can generate more neutral results than the existing dual hesitant fuzzy aggregation operators. Since compatibility is a very effective tool to measure the consensus in GDM with preference relations, then two compatibility measures for DHFPRs are proposed. After that, the developed aggregation operator and compatibility measures are applied to GDM with DHFPRs and two GDM methods are designed, which can be applied to different decision making situations. Each GDM method involves a consensus improving model with respect to DHFPRs. The model in the first method reaches the desired consensus level by adjusting the group members’ preference values, and the model in the second method improves the group consensus level by modifying the weights of group members according to their contributions to the group decision, which maintains the group members’ original opinions and allows the group members not to compromise for reaching the desired consensus level. In actual applications, we may choose a proper method to solve the GDM problems with DHFPRs in light of the actual situation. Compared with the GDM methods with IVIFPRs, the proposed methods directly apply the original DHFPRs to decision making and do not need to transform them into the IVIFPRs, which can avoid the loss and distortion of original information, and thus can generate more precise decision results.", "title": "" }, { "docid": "2ebf4b32598ba3cd74513f1bab8fe447", "text": "Anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis is an autoimmune disorder of the central nervous system (CNS). Its immunopathogenesis has been proposed to include early cerebrospinal fluid (CSF) lymphocytosis, subsequent CNS disease restriction and B cell mechanism predominance. There are limited data regarding T cell involvement in the disease. To contribute to the current knowledge, we investigated the complex system of chemokines and cytokines related to B and T cell functions in CSF and sera samples from anti-NMDAR encephalitis patients at different time-points of the disease. One patient in our study group had a long-persisting coma and underwent extraordinary immunosuppressive therapy. 
Twenty-seven paired CSF/serum samples were collected from nine patients during the follow-up period (median 12 months, range 1–26 months). The patient samples were stratified into three periods after the onset of the first disease symptom and compared with the controls. Modified Rankin score (mRS) defined the clinical status. The concentrations of the chemokines (C-X-C motif ligand (CXCL)10, CXCL8 and C-C motif ligand 2 (CCL2)) and the cytokines (interferon (IFN)γ, interleukin (IL)4, IL7, IL15, IL17A and tumour necrosis factor (TNF)α) were measured with Luminex multiple bead technology. The B cell-activating factor (BAFF) and CXCL13 concentrations were determined via enzyme-linked immunosorbent assay. We correlated the disease period with the mRS, pleocytosis and the levels of all of the investigated chemokines and cytokines. Non-parametric tests were used, a P value <0.05 was considered to be significant. The increased CXCL10 and CXCL13 CSF levels accompanied early-stage disease progression and pleocytosis. The CSF CXCL10 and CXCL13 levels were the highest in the most complicated patient. The CSF BAFF levels remained unchanged through the periods. In contrast, the CSF levels of T cell-related cytokines (INFγ, TNFα and IL17A) and IL15 were slightly increased at all of the periods examined. No dynamic changes in chemokine and cytokine levels were observed in the peripheral blood. Our data support the hypothesis that anti-NMDAR encephalitis is restricted to the CNS and that chemoattraction of immune cells dominates at its early stage. Furthermore, our findings raise the question of whether T cells are involved in this disease.", "title": "" }, { "docid": "2baad8633f9a76199f205a7560fed30c", "text": "Mobile Cloud Computing (MCC) has revolutionized the way in which mobile subscribers across the globe leverage services on the go. The mobile devices have evolved from mere devices that enabled voice calls only a few years back to smart devices that enable the user to access value added services anytime, anywhere. MCC integrates cloud computing into the mobile environment and overcomes obstacles related to performance (e.g. battery life, storage, and bandwidth), environment (e.g. heterogeneity, scalability, availability) and security (e.g. reliability and privacy).", "title": "" }, { "docid": "9006586ffd85d5c2fb7611b3b0332519", "text": "Systematic compositionality is the ability to recombine meaningful units with regular and predictable outcomes, and it’s seen as key to the human capacity for generalization in language. Recent work (Lake and Baroni, 2018) has studied systematic compositionality in modern seq2seq models using generalization to novel navigation instructions in a grounded environment as a probing tool. Lake and Baroni’s main experiment required the models to quickly bootstrap the meaning of new words. We extend this framework here to settings where the model needs only to recombine well-trained functional words (such as “around” and “right”) in novel contexts. 
Our findings confirm and strengthen the earlier ones: seq2seq models can be impressively good at generalizing to novel combinations of previously-seen input, but only when they receive extensive training on the specific pattern to be generalized (e.g., generalizing from many examples of “X around right” to “jump around right”), while failing when generalization requires novel application of compositional rules (e.g., inferring the meaning of “around right” from those of “right” and “around”).", "title": "" }, { "docid": "1a66d5b6925bb30e5cadcdd23d43ef97", "text": "The measurement of enterprise resource planning (ERP) systems success or effectiveness is critical to our understanding of the value and efficacy of ERP investment and managerial actions. Whether traditional information systems success models can be extended to investigating ERP systems success is yet to be investigated. This paper proposes a partial extension and respecification of the DeLone and MacLean model of IS success to ERP systems. The purpose of the present research is to re-examine the updated DeLone and McLean model [W. DeLone, E. McLean, The DeLone McLean model of information system success: a ten-year update, Journal of Management Information Systems 19 (4) (2003) 3–9] of ERP systems success. The updated DeLone and McLean model was applied to collect data from the questionnaires answered by 204 users of ERP systems at three high-tech firms in Taiwan. Finally, this study suggests that system quality, service quality, and information quality are most important successful factors. # 2007 Elsevier B.V. All rights reserved. www.elsevier.com/locate/compind Computers in Industry 58 (2007) 783–793", "title": "" }, { "docid": "7e57c7abcd4bcb79d5f0fe8b6cd9a836", "text": "Among the many viruses that are known to infect the human liver, hepatitis B virus (HBV) and hepatitis C virus (HCV) are unique because of their prodigious capacity to cause persistent infection, cirrhosis, and liver cancer. HBV and HCV are noncytopathic viruses and, thus, immunologically mediated events play an important role in the pathogenesis and outcome of these infections. The adaptive immune response mediates virtually all of the liver disease associated with viral hepatitis. However, it is becoming increasingly clear that antigen-nonspecific inflammatory cells exacerbate cytotoxic T lymphocyte (CTL)-induced immunopathology and that platelets enhance the accumulation of CTLs in the liver. Chronic hepatitis is characterized by an inefficient T cell response unable to completely clear HBV or HCV from the liver, which consequently sustains continuous cycles of low-level cell destruction. Over long periods of time, recurrent immune-mediated liver damage contributes to the development of cirrhosis and hepatocellular carcinoma.", "title": "" }, { "docid": "1b8e90d78ca21fcaa5cca628cba4111a", "text": "The Rutgers Master II-ND glove is a haptic interface designed for dextrous interactions with virtual environments. The glove provides force feedback up to 16 N each to the thumb, index, middle, and ring fingertips. It uses custom pneumatic actuators arranged in a direct-drive configuration in the palm. Unlike commercial haptic gloves, the direct-drive actuators make unnecessary cables and pulleys, resulting in a more compact and lighter structure. The force-feedback structure also serves as position measuring exoskeleton, by integrating noncontact Hall-effect and infrared sensors. 
The glove is connected to a haptic-control interface that reads its sensors and servos its actuators. The interface has pneumatic servovalves, signal conditioning electronics, A/D/A boards, power supply and an imbedded Pentium PC. This distributed computing assures much faster control bandwidth than would otherwise be possible. Communication with the host PC is done over an RS232 line. Comparative data with the CyberGrasp commercial haptic glove is presented.", "title": "" }, { "docid": "8bfdf2be75d41df6fe4738231241c1a3", "text": "In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf singlevector and multi-sense vector models on a benchmark phrase similarity task and a novel task for word-sense discrimination. We find that single-sense vector models perform as well or better than multi-sense vector models despite arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition are able to recover sense specific information from a single-sense vector model remark-", "title": "" }, { "docid": "97c9d91709c98cd6dd803ffc9810d88f", "text": "Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graphlabeled inputs.", "title": "" }, { "docid": "8a33040d6464f7792b3eeee1e0760925", "text": "We live in a data abundance era. Availability of large volume of diverse multimedia data streams (ranging from video, to tweets, to activity, and to PM2.5) can now be used to solve many critical societal problems. Causal modeling across multimedia data streams is essential to reap the potential of this data. However, effective frameworks combining formal abstract approaches with practical computational algorithms for causal inference from such data are needed to utilize available data from diverse sensors. We propose a causal modeling framework that builds on data-driven techniques while emphasizing and including the appropriate human knowledge in causal inference. We show that this formal framework can help in designing a causal model with a systematic approach that facilitates framing sharper scientific questions, incorporating expert's knowledge as causal assumptions, and evaluating the plausibility of these assumptions. 
We show the applicability of the framework in an important asthma management application using meteorological and pollution data streams.", "title": "" }, { "docid": "13cb793ca9cdf926da86bb6fc630800a", "text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.", "title": "" } ]
scidocsrr
75a800343177a572e7d7e368a2bb87af
Probability of stroke: a risk profile from the Framingham Study.
[ { "docid": "cf506587f2699d88e4a2e0be36ccac41", "text": "A complete list of the titles in this series appears at the end of this volume. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic format.", "title": "" } ]
[ { "docid": "afe24ba1c3f3423719a98e1a69a3dc70", "text": "This brief presents a nonisolated multilevel linear amplifier with nonlinear component (LINC) power amplifier (PA) implemented in a standard 0.18-μm complementary metal-oxide- semiconductor process. Using a nonisolated power combiner, the overall power efficiency is increased by reducing the wasted power at the combined out-phased signal; however, the efficiency at low power still needs to be improved. To further improve the efficiency of the low-power (LP) mode, we propose a multiple-output power-level LINC PA, with load modulation implemented by switches. In addition, analysis of the proposed design on the system level as well as the circuit level was performed to optimize its performance. The measurement results demonstrate that the proposed technique maintains more than 45% power-added efficiency (PAE) for peak power at 21 dB for the high-power mode and 17 dBm for the LP mode at 600 MHz. The PAE for a 6-dB peak-to-average ratio orthogonal frequency-division multiplexing modulated signal is higher than 24% PAE in both power modes. To the authors' knowledge, the proposed output-phasing PA is the first implemented multilevel LINC PA that uses quarter-wave lines without multiple power supply sources.", "title": "" }, { "docid": "5318baa10a6db98a0f31c6c30fdf6104", "text": "In image analysis, the images are often represented by multiple visual features (also known as multiview features), that aim to better interpret them for achieving remarkable performance of the learning. Since the processes of feature extraction on each view are separated, the multiple visual features of images may include overlap, noise, and redundancy. Thus, learning with all the derived views of the data could decrease the effectiveness. To address this, this paper simultaneously conducts a hierarchical feature selection and a multiview multilabel (MVML) learning for multiview image classification, via embedding a proposed a new block-row regularizer into the MVML framework. The block-row regularizer concatenating a Frobenius norm (F-norm) regularizer and an l2,1-norm regularizer is designed to conduct a hierarchical feature selection, in which the F-norm regularizer is used to conduct a high-level feature selection for selecting the informative views (i.e., discarding the uninformative views) and the 12,1-norm regularizer is then used to conduct a low-level feature selection on the informative views. The rationale of the use of a block-row regularizer is to avoid the issue of the over-fitting (via the block-row regularizer), to remove redundant views and to preserve the natural group structures of data (via the F-norm regularizer), and to remove noisy features (the 12,1-norm regularizer), respectively. We further devise a computationally efficient algorithm to optimize the derived objective function and also theoretically prove the convergence of the proposed optimization method. Finally, the results on real image datasets show that the proposed method outperforms two baseline algorithms and three state-of-the-art algorithms in terms of classification performance.", "title": "" }, { "docid": "a999bf3da879dde7fc2acb8794861daf", "text": "Most OECD Member countries have sought to renew their systems and structures of public management in the last 10-15 years. Some started earlier than others and the emphasis will vary among Member countries according to their historic traditions and institutions. 
There is no single best model of public management, but what stands out most clearly is the extent to which countries have pursued and are pursuing broadly common approaches to public management reform. This is most probably because countries have been responding to essentially similar pressures to reform.", "title": "" }, { "docid": "8c3ecd27a695fef2d009bbf627820a0d", "text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.", "title": "" }, { "docid": "3dfa264bb5b7e4620a2a9efa70c99db4", "text": "Recent advances in pattern analysis techniques together with the advent of miniature vibration sensors and high speed data acquisition technologies provide a unique opportunity to develop and implement in-situ, beneficent, and non-intrusive condition monitoring and quality assessment methods for a broad range of rotating machineries. This invited paper provides an overview of such a framework. It provides a review of classical methods used in vibration signal processing in both time and frequency domain. Subsequently, a collection of recent computational intelligence based methods in this problem domain with case studies using both single and multi-dimensional signals is presented. The datasets used in these case studies have been acquired from a variety of real-life problems 1 Vibration and Condition Monitoring Vibration signals provide useful information that leads to insights on the operating condition of the equipment under test [1, 2]. By inspecting the physical characteristics of the vibration signals, one is able to detect the presence of a fault in an operating machine, to localise the position of a crack in gear, to diagnose the health state of a ball bearing, etc. For decades, researchers are looking at means to diagnose automatically the health state of rotating machines, from the smaller bearings and gears to the larger combustion engines and turbines. With the advent of wireless technologies and miniature transducers, we are now able to monitor machine operating condition in real time and, with the aid of computational intelligence and pattern recognition technique, in an automated fashion. This paper draws from a collection of past and recent works in the area of automatic machine condition monitoring using vibration signals. Typically, vibration signals are acquired through vibration sensors. The three main classes of vibration sensors are displacement sensors, velocity sensors, and accelerometers. 
Displacement sensors can be non-contact sensors as in the case of optical sensors and they are more sensitive in the lower frequency range, typically less than 1 kHz. Velocity sensors, on the other hand, operate more effectively with flat amplitude response in the 10 Hz to 2 kHz range. Among these sensors, accelerometers have the best amplitude response in the high frequency range up to tens of kHz. Usually, accelerometers are built using capacitive sensing, or more commonly, a piezoelectric mechanism. Accelerometers are usually light weight ranging from 0.4 gram to 50 gram. 1.1 Advantages of vibration signal monitoring Vibration signal processing has some obvious advantages. First, vibration sensors are non-intrusive, and at times non-contact. As such, we can perform diagnostic in a non-destructive manner. Second, vibration signals can be obtained online and in-situ. This is a desired feature for production lines. The trending capability also provides means to predictive maintenance of the machineries. As such, unnecessary downtime for preventive maintenance can be minimized. Third, the vibration sensors are inexpensive and widely available. Modern mobile smart devices are equipped with one tri-axial accelerometer typically. Moreover, the technologies to acquire and convert the analogue outputs from the sensors are affordable nowadays. Last but not least, techniques for diagnosing a wide range", "title": "" }, { "docid": "bc018ef7cbcf7fc032fe8556016d08b1", "text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed firstly, and the neighborhoods of different scales are fused together by a simple arithmetic operation. And then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CURet and UIUC. Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.", "title": "" }, { "docid": "a0f4b7f3f9f2a5d430a3b8acead2b746", "text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. 
Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. Fred wearing a red hat is walking in the living room Retrieve Compose Retrieve Compose Retrieve Pebbles is sitting at a table in a room watching the television Retrieve Compose Retrieve Compose Compose Retrieve Retrieve Fuse", "title": "" }, { "docid": "274373d46b748d92e6913496507353b1", "text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.", "title": "" }, { "docid": "90d82110c2b10c98c5cb99d68ebb9df3", "text": "Purpose – The purpose of this paper is to investigate the demographic characteristics of small and medium enterprises (SMEs) with regards to their patterns of internet-based information and communications technology (ICT) adoption, taking into account the dimensions of ICT benefits, barriers, and subsequently adoption intention. Design/methodology/approach – A questionnaire-based survey is used to collect data from 406 managers or owners of SMEs in Malaysia. Findings – The results reveal that the SMEs would adopt internet-based ICT regardless of years of business start-up and internet experience. Some significant differences are spotted between manufacturing and service SMEs in terms of their demographic characteristics and internet-based ICT benefits, barriers, and adoption intention. Both the industry types express intention to adopt internet-based ICT, with the service-based SMEs demonstrating greater intention. Research limitations/implications – The paper focuses only on the SMEs in the southern region of Malaysia. Practical implications – The findings offer valuable insights to the SMEs – in particular promoting internet-based ICT adoption for future business success. Originality/value – This paper is perhaps one of the first to comprehensively investigate the relationship between demographic characteristics of SMEs and the various variables affecting their internet-based ICT adoption intention.", "title": "" }, { "docid": "4d4540a59e637f9582a28ed62083bfd6", "text": "Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manual discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. 
In this paper, we extend this idea by proposing a sentencelevel neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. Experiments show that our proposed model gives significantly higher accuracies compared to the current best method for targeted sentiment analysis.", "title": "" }, { "docid": "641fa9e397e1ce6e320ec4cacfd3064f", "text": "Machine translation is a natural candidate problem for reinforcement learning from human feedback: users provide quick, dirty ratings on candidate translations to guide a system to improve. Yet, current neural machine translation training focuses on expensive human-generated reference translations. We describe a reinforcement learning algorithm that improves neural machine translation systems from simulated human feedback. Our algorithm combines the advantage actor-critic algorithm (Mnih et al., 2016) with the attention-based neural encoderdecoder architecture (Luong et al., 2015). This algorithm (a) is well-designed for problems with a large action space and delayed rewards, (b) effectively optimizes traditional corpus-level machine translation metrics, and (c) is robust to skewed, high-variance, granular feedback modeled after actual human behaviors.", "title": "" }, { "docid": "8d40a30ba43e055cf830af0514f01c9d", "text": "The rapid growth of data size and accessibility in recent years has instigated a shift of philosophy in algorithm design for artificial intelligence. Instead of engineering algorithms by hand, the ability to learn composable systems automatically from massive amounts of data has led to groundbreaking performance in important domains such as computer vision, speech recognition, and natural language processing. The most popular class of techniques used in these domains is called deep learning , and is seeing significant attention from industry. However, these models require incredible amounts of data and compute power to train, and are limited by the need for better hardware acceleration to accommodate scaling beyond current data and model sizes. While the current solution has been to use clusters of graphics processing units (GPU) as general purpose processors (GPGPU), the use of field programmable gate arrays (FPGA) provide an interesting alternative. Current trends in design tools for FPGAs have made them more compatible with the high-level software practices typically practiced in the deep learning community, making FPGAs more accessible to those who build and deploy models. Since FPGA architectures are flexible, this could also allow researchers the ability to explore model-level optimizations beyond what is possible on fixed architectures such as GPUs. As well, FPGAs tend to provide high performance per watt of power consumption, which is of particular importance for application scientists interested in large scale server-based deployment or resource-limited embedded applications. 
This review takes a look at deep learning and FPGAs from a hardware acceleration perspective, identifying trends and innovations that make these technologies a natural fit, and motivates a discussion on how FPGAs may best serve the needs of the deep learning community moving forward.", "title": "" }, { "docid": "7e4a4e76ba976a24151b243148a2feb4", "text": "Amodel based clustering procedure for data of mixed type, clustMD, is developed using a latent variable model. It is proposed that a latent variable, following a mixture of Gaussian distributions, generates the observed data of mixed type. The observed data may be any combination of continuous, binary, ordinal or nominal variables. clustMD employs a parsimonious covariance structure for the latent variables, leading to a suite of six clustering models that vary in complexity and provide an elegant and unified approach to clustering mixed data. An expectation maximisation (EM) algorithm is used to estimate clustMD; in the presence of nominal data a Monte Carlo EM algorithm is required. The clustMD model is illustrated by clustering simulated mixed type data and prostate cancer patients, on whom mixed data have been recorded.", "title": "" }, { "docid": "2d54a447df50a31c6731e513bfbac06b", "text": "Lumbar intervertebral disc diseases are among the main causes of lower back pain (LBP). Desiccation is a common disease resulting from various reasons and ultimately most people are affected by desiccation at some age. We propose a probabilistic model that incorporates intervertebral disc appearance and contextual information for automating the diagnosis of lumbar disc desiccation. We utilize a Gibbs distribution for processing localized lumbar intervertebral discs' appearance and contextual information. We use 55 clinical T2-weighted MRI for lumbar area and achieve over 96% accuracy on a cross validation experiment.", "title": "" }, { "docid": "40c9250b3fb527425138bc41acf8fd4e", "text": "Noise pollution is a major problem in cities around the world. The current methods to assess it neglect to represent the real exposure experienced by the citizens themselves, and therefore could lead to wrong conclusions and a biased representations. In this paper we present a novel approach to monitor noise pollution involving the general public. Using their mobile phones as noise sensors, we provide a low cost solution for the citizens to measure their personal exposure to noise in their everyday environment and participate in the creation of collective noise maps by sharing their geo-localized and annotated measurements with the community. Our prototype, called NoiseTube, can be found online [1].", "title": "" }, { "docid": "db41f44f0ecccdd1828ac2789c2cedc9", "text": "Porter’s generic strategy matrix, which highlights cost leadership, differentiation and focus as the three basic choices for firms, has dominated corporate competitive strategy for the last thirty years. According to this model, a venture can choose how it wants to compete, based on the match between its type of competitive advantage and the market target pursued, as the key determinants of choice (Akan, Allen, Helms & Spralls, 2006:43).", "title": "" }, { "docid": "4fb301cffa66f37c07bd6c44a108e142", "text": "Unambiguous identities of resources are important aspect for semantic web. This paper addresses the personal identity issue in the context of bibliographies. 
Because of abbreviations or misspelling of names in publications or bibliographies, an author may have multiple names and multiple authors may share the same name. Such name ambiguity affects the performance of identity matching, document retrieval and database federation, and causes improper attribution of research credit. This paper describes a new K-means clustering algorithm based on an extensible Naïve Bayes probability model to disambiguate authors with the same first name initial and last name in the bibliographies and proposes a canonical name. The model captures three types of bibliographic information: coauthor names, the title of the paper and the title of the journal or proceeding. The algorithm achieves best accuracies of 70.1% and 73.6% on disambiguating 6 different “J Anderson” s and 9 different \"J Smith\" s based on the citations collected from researchers’ publication web pages.", "title": "" }, { "docid": "ab0d19b1cb4a0f5d283f67df35c304f4", "text": "OBJECTIVE\nWe compared temperament and character traits in children and adolescents with bipolar disorder (BP) and healthy control (HC) subjects.\n\n\nMETHOD\nSixty nine subjects (38 BP and 31 HC), 8-17 years old, were assessed with the Kiddie Schedule for Affective Disorders and Schizophrenia-Present and Lifetime. Temperament and character traits were measured with parent and child versions of the Junior Temperament and Character Inventory.\n\n\nRESULTS\nBP subjects scored higher on novelty seeking, harm avoidance, and fantasy subscales, and lower on reward dependence, persistence, self-directedness, and cooperativeness compared to HC (all p < 0.007), by child and parent reports. These findings were consistent in both children and adolescents. Higher parent-rated novelty seeking, lower self-directedness, and lower cooperativeness were associated with co-morbid attention-deficit/hyperactivity disorder (ADHD). Lower parent-rated reward dependence was associated with co-morbid conduct disorder, and higher child-rated persistence was associated with co-morbid anxiety.\n\n\nCONCLUSIONS\nThese findings support previous reports of differences in temperament in BP children and adolescents and may assist in a greater understanding of BP children and adolescents beyond mood symptomatology.", "title": "" }, { "docid": "96a280588f4f5e61a4470ffc1277efa9", "text": "Hyperspectral data acquired from field-based platforms present new challenges for their analysis, particularly for complex vertical surfaces exposed to large changes in the geometry and intensity of illumination. The use of hyperspectral data to map rock types on a vertical mine face is demonstrated, with a view to providing real-time information for automated mining applications. The performance of two classification techniques, namely, spectral angle mapper (SAM) and support vector machines (SVMs), is compared rigorously using a spectral library acquired under various conditions of illumination. SAM and SVM are then applied to a mine face, and results are compared with geological boundaries mapped in the field. Effects of changing conditions of illumination, including shadow, were investigated by applying SAM and SVM to imagery acquired at different times of the day. As expected, classification of the spectral libraries showed that, on average, SVM gave superior results to SAM, although SAM performed better where spectra were acquired under conditions of shadow. In contrast, when applied to hyperspectral imagery of a mine face, SVM did not perform as well as SAM. 
Shadow, through its impact upon spectral curve shape and albedo, had a profound impact on classification using SAM and SVM.", "title": "" }, { "docid": "f90e6d3084733994935fcbee64286aec", "text": "To find the position of an acoustic source in a room, typically, a set of relative delays among different microphone pairs needs to be determined. The generalized cross-correlation (GCC) method is the most popular to do so and is well explained in a landmark paper by Knapp and Carter. In this paper, the idea of cross-correlation coefficient between two random signals is generalized to the multichannel case by using the notion of spatial prediction. The multichannel spatial correlation matrix is then deduced and its properties are discussed. We then propose a new method based on the multichannel spatial correlation matrix for time delay estimation. It is shown that this new approach can take advantage of the redundancy when more than two microphones are available and this redundancy can help the estimator to better cope with noise and reverberation.", "title": "" } ]
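The last passage above contrasts the classical generalized cross-correlation (GCC) approach of Knapp and Carter with a multichannel spatial-correlation estimator. As a rough point of reference only, the sketch below shows a minimal two-microphone, PHAT-weighted variant of the classical GCC delay estimate; it is not the multichannel method described in the passage, and the function name, signature, and test values are illustrative assumptions.

```python
# Minimal GCC-PHAT sketch for two-channel time-delay estimation (illustrative only).
import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """Estimate the time delay (in seconds) of signal x relative to signal y."""
    n = len(x) + len(y)                      # zero-pad to avoid circular wrap-around
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                   # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

if __name__ == "__main__":
    fs, d = 16000, 40                        # simulate a 40-sample (2.5 ms) delay
    rng = np.random.default_rng(0)
    s = rng.standard_normal(4096)
    print(gcc_phat(np.roll(s, d), s, fs))    # expected: approximately d / fs
```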
scidocsrr
f76b344b0c6bb08c656c098fb6f42633
Semantic Stixels: Depth is not enough
[ { "docid": "d7562b32dc75c3b599980006ce924251", "text": "This work concentrates on vision processing for ADAS and intelligent vehicle applications. We propose a color extension to the disparity-based Stixel World method, so that the road can be robustly distinguished from obstacles with respect to erroneous disparity measurements. Our extension learns color appearance models for road and obstacle classes in an online and self-supervised fashion. The algorithm is tightly integrated within the core of the optimization process of the original Stixel World, allowing for strong fusion of the disparity and color signals. We perform an extensive evaluation, including different self-supervised learning strategies and different color models. Our newly recorded, publicly available data set is intentionally focused on challenging traffic scenes with many low-texture regions, causing numerous disparity artifacts. In this evaluation, we increase the F-score of the drivable distance from 0.86 to 0.97, compared to a tuned version of the state-of-the-art baseline method. This clearly shows that our color extension increases the robustness of the Stixel World, by reducing the number of falsely detected obstacles while not deteriorating the detection of true obstacles.", "title": "" }, { "docid": "a77eddf9436652d68093946fbe1d2ed0", "text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.", "title": "" } ]
[ { "docid": "afa3fa35061b54c1ca662f0885b2e4be", "text": "This paper discusses an analytical study that quantifies the expected earthquake-induced losses in typical office steel frame buildings designed with perimeter special moment frames in highly seismic regions. It is shown that for seismic events associated with low probabilities of occurrence, losses due to demolition and collapse may be significantly overestimated when the expected loss computations are based on analytical models that ignore the composite beam effects and the interior gravity framing system of a steel frame building. For frequently occurring seismic events building losses are dominated by non-structural content repairs. In this case, the choice of the analytical model representation of the steel frame building becomes less important. Losses due to demolition and collapse in steel frame buildings with special moment frames designed with strong-column/weak-beam ratio larger than 2.0 are reduced by a factor of two compared with those in the same frames designed with a strong-column/weak-beam ratio larger than 1.0 as recommended in ANSI/AISC-341-10. The expected annual losses (EALs) of steel frame buildings with SMFs vary from 0.38% to 0.74% over the building life expectancy. The EALs are dominated by repairs of accelerationsensitive non-structural content followed by repairs of drift-sensitive non-structural components. It is found that the effect of strong-column/weak-beam ratio on EALs is negligible. This is not the case when the present value of life-cycle costs is selected as a loss-metric. It is advisable to employ a combination of loss-metrics to assess the earthquake-induced losses in steel frame buildings with special moment frames depending on the seismic performance level of interest. Copyright c © 2017 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "8442bf64a1c89bbddb6ffb8001b1381e", "text": "In this paper we present a scalable hardware architecture to implement large-scale convolutional neural networks and state-of-the-art multi-layered artificial vision systems. This system is fully digital and is a modular vision engine with the goal of performing real-time detection, recognition and segmentation of mega-pixel images. We present a performance comparison between a software, FPGA and ASIC implementation that shows a speed up in custom hardware implementations.", "title": "" }, { "docid": "ce24b783f2157fdb4365b60aa2e6163a", "text": "Geosciences is a field of great societal relevance that requires solutions to several urgent problems facing our humanity and the planet. As geosciences enters the era of big data, machine learning (ML)— that has been widely successful in commercial domains—offers immense potential to contribute to problems in geosciences. However, problems in geosciences have several unique challenges that are seldom found in traditional applications, requiring novel problem formulations and methodologies in machine learning. This article introduces researchers in the machine learning (ML) community to these challenges offered by geoscience problems and the opportunities that exist for advancing both machine learning and geosciences. We first highlight typical sources of geoscience data and describe their properties that make it challenging to use traditional machine learning techniques. 
We then describe some of the common categories of geoscience problems where machine learning can play a role, and discuss some of the existing efforts and promising directions for methodological development in machine learning. We conclude by discussing some of the emerging research themes in machine learning that are applicable across all problems in the geosciences, and the importance of a deep collaboration between machine learning and geosciences for synergistic advancements in both disciplines.", "title": "" }, { "docid": "7fc3dfcc8fa43c36938f41877a65bed7", "text": "We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: It does not require real, pose-annotated training data, generalizes to various test sensors and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Experiments on the T-LESS and LineMOD datasets show that our method outperforms similar modelbased approaches and competes with state-of-the art approaches that require real pose-annotated images. 1", "title": "" }, { "docid": "b41c05577d59271495ce60c104469854", "text": "A method for human head pose estimation in multicamera environments is proposed. The method computes the textured visual hull of the subject and unfolds the texture of the head on a hypothetical sphere around it, whose parameterization is iteratively rotated so that the face eventually occurs on its equator. This gives rise to a spherical image, in which face detection is simplified, because exactly one frontal face is guaranteed to appear in it. In this image, the face center yields two components of pose (yaw, pitch), while the third (roll) is retrieved from the orientation of the major symmetry axis of the face. Face detection applied on the original images reduces the required iterations and anchors tracking drift. The method is demonstrated and evaluated in several data sets, including ones with known ground truth. Experimental results show that the proposed method is accurate and robust to distant imaging, despite the low-resolution appearance of subjects.", "title": "" }, { "docid": "1819af3b3d96c182b7ea8a0e89ba5bbe", "text": "The fingerprint is one of the oldest and most widely used biometric modality for person identification. Existing automatic fingerprint matching systems perform well when the same sensor is used for both enrollment and verification (regular matching). However, their performance significantly deteriorates when different sensors are used (cross-matching, fingerprint sensor interoperability problem). We propose an automatic fingerprint verification method to solve this problem. It was observed that the discriminative characteristics among fingerprints captured with sensors of different technology and interaction types are ridge orientations, minutiae, and local multi-scale ridge structures around minutiae. To encode this information, we propose two minutiae-based descriptors: histograms of gradients obtained using a bank of Gabor filters and binary gradient pattern descriptors, which encode multi-scale local ridge patterns around minutiae. 
In addition, an orientation descriptor is proposed, which compensates for the spurious and missing minutiae problem. The scores from the three descriptors are fused using a weighted sum rule, which scales each score according to its verification performance. Extensive experiments were conducted using two public domain benchmark databases (FingerPass and Multi-Sensor Optical and Latent Fingerprint) to show the effectiveness of the proposed system. The results showed that the proposed system significantly outperforms the state-of-the-art methods based on minutia cylinder-code (MCC), MCC with scale, VeriFinger—a commercial SDK, and a thin-plate spline model.", "title": "" }, { "docid": "400d7dd2d6575edc3a5f34667a8eb426", "text": "The Internet has facilitated the emergence of new strategies and business models in several industries. In the UK, significant changes are happening in supermarket retailing with the introduction of online shopping, especially in terms of channel development and coordination, business scope redefinition, the development of fulfilment centre model and core processes, new ways of customer value creation, and online partnerships. In fact the role of online supermarket itself has undergone some significant changes in the last few years. Based on recent empirical evidence gathered in the UK, this paper will illustrate current developments in the strategies and business models of online supermarket retailing. The main evidence has been collected through an online survey of 6 online supermarkets and in-depth case studies of two leading players. Some of the tendencies are comparable to what happened in retail banking with the introduction of Internet banking, but other tendencies are unique to the supermarket retailing industry. This is a rapidly evolving area and further studies are clearly needed.", "title": "" }, { "docid": "87fe73a5bc0b80fd0af1d0e65d1039c1", "text": "Reactive programming improves the design of reactive applications by relocating the logic for managing dependencies between dependent values away from the application logic to the language implementation. Many distributed applications are reactive. Yet, existing change propagation algorithms are not suitable in a distributed setting.\n We propose Distributed REScala, a reactive language with a change propagation algorithm that works without centralized knowledge about the topology of the dependency structure among reactive values and avoids unnecessary propagation of changes, while retaining safety guarantees (glitch freedom). Distributed REScala enables distributed reactive programming, bringing the benefits of reactive programming to distributed applications. We demonstrate the enabled design improvements by a case study. We also empirically evaluate the performance of our algorithm in comparison to other algorithms in a simulated distributed setting.", "title": "" }, { "docid": "7b5f6f0e3c1af5cc4047b8cec373de24", "text": "Recognizing lexical entailment (RLE) always plays an important role in inference of natural language, i.e., identifying whether one word entails another, for example, fox entails animal. In the literature, automatically recognizing lexical entailment for word pairs deeply relies on words’ contextual representations. However, as a “prototype” vector, a single representation cannot reveal multifaceted aspects of the words due to their homonymy and polysemy. In this paper, we propose a supervised Context-Enriched Neural Network (CENN) method for recognizing lexical entailment. 
To be specific, we first utilize multiple embedding vectors from different contexts to represent the input word pairs. Then, through different combination methods and attention mechanism, we integrate different embedding vectors and optimize their weights to predict whether there are entailment relations in word pairs. Moreover, our proposed framework is flexible and open to handle different word contexts and entailment perspectives in the text corpus. Extensive experiments on five datasets show that our approach significantly improves the performance of automatic RLE in comparison with several state-of-the-art methods.", "title": "" }, { "docid": "bd3e5a403cc42952932a7efbd0d57719", "text": "The acoustic echo cancellation system is very important in the communication applications that are used these days; in view of this importance we have implemented this system practically by using DSP TMS320C6713 Starter Kit (DSK). The acoustic echo cancellation system was implemented based on 8 subbands techniques using Least Mean Square (LMS) algorithm and Normalized Least Mean Square (NLMS) algorithm. The system was evaluated by measuring the performance according to Echo Return Loss Enhancement (ERLE) factor and Mean Square Error (MSE) factor. Keywords—Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8 subbands adaptive filter", "title": "" }, { "docid": "53821da1274fd420fe0f7eeba024b95d", "text": "An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a bibliographic database. Subjects were undergraduates with little or no prior computing experience. Subjects trained with a conceptual model of the system performed better than subjects trained with procedural instructions, but only on complex, problem-solving tasks. Performance was equal on simple tasks. Differences in patterns of interaction with the system (based on a stochastic process model) showed parallel results. Most subjects were able to articulate some description of the system's operation, but few articulated a model similar to the card catalog analogy provided in training. Eleven of 43 subjects were unable to achieve minimal competency in system use. The failure rate was equal between training conditions and genders; the only differences found between those passing and failing the benchmark test were academic major and in frequency of library use.", "title": "" }, { "docid": "ee473a0bb8b96249e61ad5e3925c11c2", "text": "Simple, short, and compact hashtags cover a wide range of information on social networks. Although many works in the field of natural language processing (NLP) have demonstrated the importance of hashtag recommendation, hashtag recommendation for images has barely been studied. In this paper, we introduce the HARRISON dataset, a benchmark on hashtag recommendation for real world images in social networks. The HARRISON dataset is a realistic dataset, composed of 57,383 photos from Instagram and an average of 4.5 associated hashtags for each photo. To evaluate our dataset, we design a baseline framework consisting of visual feature extractor based on convolutional neural network (CNN) and multi-label classifier based on neural network. Based on this framework, two single feature-based models, object-based and scene-based model, and an integrated model of them are evaluated on the HARRISON dataset. 
Our dataset shows that hashtag recommendation task requires a wide and contextual understanding of the situation conveyed in the image. As far as we know, this work is the first vision-only attempt at hashtag recommendation for real world images in social networks. We expect this benchmark to accelerate the advancement of hashtag recommendation.", "title": "" }, { "docid": "38102dfe63b707499c2f01e2e46b4031", "text": "Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5-bits, which achieves higher final test accuracy than linear at 5-bits.", "title": "" }, { "docid": "7d7c596d334153f11098d9562753a1ee", "text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping the the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.", "title": "" }, { "docid": "979a3ca422e92147b25ca1b8e8ff9e5a", "text": "Open Information Extraction (Open IE) is a promising approach for unrestricted Information Discovery (ID). While Open IE is a highly scalable approach, allowing unsupervised relation extraction from open domains, it currently has some limitations. First, it lacks the expressiveness needed to properly represent and extract complex assertions that are abundant in text. 
Second, it does not consolidate the extracted propositions, which causes simple queries above Open IE assertions to return insufficient or redundant information. To address these limitations, we propose in this position paper a novel representation for ID – Propositional Knowledge Graphs (PKG). PKGs extend the Open IE paradigm by representing semantic inter-proposition relations in a traversable graph. We outline an approach for constructing PKGs from single and multiple texts, and highlight a variety of high-level applications that may leverage PKGs as their underlying information discovery and representation framework.", "title": "" }, { "docid": "0c1f01d9861783498c44c7c3d0acd57e", "text": "We understand a sociotechnical system as a multistakeholder cyber-physical system. We introduce governance as the administration of such a system by the stakeholders themselves. In this regard, governance is a peer-to-peer notion and contrasts with traditional management, which is a top-down hierarchical notion. Traditionally, there is no computational support for governance and it is achieved through out-of-band interactions among system administrators. Not surprisingly, traditional approaches simply do not scale up to large sociotechnical systems.\n We develop an approach for governance based on a computational representation of norms in organizations. Our approach is motivated by the Ocean Observatory Initiative, a thirty-year $400 million project, which supports a variety of resources dealing with monitoring and studying the world's oceans. These resources include autonomous underwater vehicles, ocean gliders, buoys, and other instrumentation as well as more traditional computational resources. Our approach has the benefit of directly reflecting stakeholder needs and assuring stakeholders of the correctness of the resulting governance decisions while yielding adaptive resource allocation in the face of changes in both stakeholder needs and physical circumstances.", "title": "" }, { "docid": "431581766931936e22acdae57fb192be", "text": "Social network analysis (SNA), in essence, is not a formal theory in social science, but rather an approach for investigating social structures, which is why SNA is often referred to as structural analysis [1]. The most important difference between social network analysis and the traditional or classic social research approach is that the contexts of the social actor, or the relationships between actors are the first considerations of the former, while the latter focuses on individual properties. A social network is a group of collaborating, and/or competing individuals or entities that are related to each other. It may be presented as a graph, or a multi-graph; each participant in the collaboration or competition is called an actor and depicted as a node in the graph theory. Valued relations between actors are depicted as links, or ties, either directed or undirected, between the corresponding nodes. Actors can be persons, organizations, or groups – any set of related entities. As such, SNA may be used on different levels, ranging from individuals, web pages, families, small groups, to large organizations, parties, and even to nations. 
According to the well known SNA researcher Lin Freeman [2], network analysis is based on the intuitive notion that these patterns are important features of the lives of the individuals or social entities who display them; Network analysts believe that how an individual lives, or social entity depends in large part on how that they are tied into the larger web of social connections/structures. Many believe, moreover, that the success or failure of societies and organizations often depends on the patterning of their internal structure. With a history of more than 70 years, SNA as an interdisciplinary technique developed under many influences, which come from different fields such as sociology, mathematics and computer science, are becoming increasingly important across many disciplines, including sociology, economics, communication science, and psychology around the world. In the current chapter of this book, the author discusses", "title": "" }, { "docid": "835fd7a4410590a3d848222eb3159aeb", "text": "Modularity in organizations can facilitate the creation and development of dynamic capabilities. Paradoxically, however, modular management can also stifle the strategic potential of such capabilities by conflicting with the horizontal integration of units. We address these issues through an examination of how modular management of information technology (IT), project teams and front-line personnel in concert with knowledge management (KM) interventions influence the creation and development of dynamic capabilities at a large Asia-based call center. Our findings suggest that a full capitalization of the efficiencies created by modularity may be closely linked to the strategic sense making abilities of senior managers to assess the long-term business value of the dominant designs available in the market. Drawing on our analysis we build a modular management-KM-dynamic capabilities model, which highlights the evolution of three different levels of dynamic capabilities and also suggests an inherent complementarity between modular and integrated approaches. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "36f37bdf7da56a57f29d026dca77e494", "text": "Fifth generation (5G) systems are expected to introduce a revolution in the ICT domain with innovative networking features, such as device-to-device (D2D) communications. Accordingly, in-proximity devices directly communicate with each other, thus avoiding routing the data across the network infrastructure. This innovative technology is deemed to be also of high relevance to support effective heterogeneous objects interconnection within future IoT ecosystems. However, several open challenges shall be solved to achieve a seamless and reliable deployment of proximity-based communications. In this paper, we give a contribution to trust and security enhancements for opportunistic hop-by-hop forwarding schemes that rely on cellular D2D communications. To tackle the presence of malicious nodes in the network, reliability and reputation notions are introduced to model the level of trust among involved devices. To this aim, social-awareness of devices is accounted for, to better support D2D-based multihop content uploading. 
Our simulative results in small-scale IoT environments, demonstrate that data loss due to malicious nodes can be drastically reduced and gains in uploading time be reached with the proposed solution.", "title": "" }, { "docid": "615a24719fe4300ea8971e86014ed8fe", "text": "This paper presents a new code for the analysis of gamma spectra generated by an equipment for continuous measurement of gamma radioactivity in aerosols with paper filter. It is called pGamma and has been developed by the Nuclear Engineering Research Group at the Technical University of Catalonia - Barcelona Tech and by Raditel Serveis i Subministraments Tecnològics, Ltd. The code has been developed to identify the gamma emitters and to determine their activity concentration. It generates alarms depending on the activity of the emitters and elaborates reports. Therefore it includes a library with NORM and artificial emitters of interest. The code is being adapted to the monitors of the Environmental Radiological Surveillance Network of the local Catalan Government in Spain (Generalitat de Catalunya) and is used at three stations of the Network.", "title": "" } ]
scidocsrr
88786a5f653471b956befce547fc090e
Robotic calligraphy — Learning how to write single strokes of Chinese and Japanese characters
[ { "docid": "1f37b0d252de40c55eee0109c168983b", "text": "The algorithm may be programmed without multiplication or division instructions and is eficient with respect to speed of execution and memory utilization. This paper describes an algorithm for computer control of a type of digital plotter that is now in common use with digital computers .' The plotter under consideration is capable of executing, in response to an appropriate pulse, any one of the eight linear movements shown in Figure 1. Thus, the plotter can move linearly from a point on a mesh to any adjacent point on the mesh. A typical mesh size is 1/100th of an inch. The data to be plotted are expressed in an (x , y) rectangular coordinate system which has been scaled with respect to the mesh; i.e., the data points lie on mesh points and consequently have integral coordinates. It is assumed that the data include a sufficient number of appropriately selected points to produce a satisfactory representation of the curve by connecting the points with line segments, as illustrated in Figure 2. In Figure 3, the line segment connecting", "title": "" } ]
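The passage above describes an integer-only plotting algorithm that drives the pen between adjacent mesh points using one of eight unit moves. The sketch below, which relies only on additions, subtractions, shifts and comparisons in the spirit of that routine, shows one way such line stepping can be written; the function and variable names are illustrative assumptions rather than the original formulation.

```python
# Sketch of 8-direction, integer-only line stepping (no multiplication or division).
def line_steps(x0, y0, x1, y1):
    """Yield (dx, dy) unit moves that take the pen from (x0, y0) to (x1, y1)."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy                       # running error term, updated by adds/subtracts only
    x, y = x0, y0
    while (x, y) != (x1, y1):
        e2 = err << 1                   # 2 * err via a shift
        step_x = step_y = 0
        if e2 > -dy:
            err -= dy
            x += sx
            step_x = sx
        if e2 < dx:
            err += dx
            y += sy
            step_y = sy
        yield (step_x, step_y)          # one of the eight unit moves (axis or diagonal)

if __name__ == "__main__":
    print(list(line_steps(0, 0, 5, 2)))
```

Yielding one unit move per iteration matches a stepper-driven pen or plotter, which consumes exactly one pulse per mesh increment.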
[ { "docid": "538f1b131a9803db07ab20f202ecc96e", "text": "In this paper, we propose a direction-of-arrival (DOA) estimation method by combining multiple signal classification (MUSIC) of two decomposed linear arrays for the corresponding coprime array signal processing. The title “DECOM” means that, first, the nonlinear coprime array needs to be DECOMposed into two linear arrays, and second, Doa Estimation is obtained by COmbining the MUSIC results of the linear arrays, where the existence and uniqueness of the solution are proved. To reduce the computational complexity of DECOM, we design a two-phase adaptive spectrum search scheme, which includes a coarse spectrum search phase and then a fine spectrum search phase. Extensive simulations have been conducted and the results show that the DECOM can achieve accurate DOA estimation under different SNR conditions.", "title": "" }, { "docid": "659736f536f23c030f6c9cd86df88d1d", "text": "Studies of human addicts and behavioural studies in rodent models of addiction indicate that key behavioural abnormalities associated with addiction are extremely long lived. So, chronic drug exposure causes stable changes in the brain at the molecular and cellular levels that underlie these behavioural abnormalities. There has been considerable progress in identifying the mechanisms that contribute to long-lived neural and behavioural plasticity related to addiction, including drug-induced changes in gene transcription, in RNA and protein processing, and in synaptic structure. Although the specific changes identified so far are not sufficiently long lasting to account for the nearly permanent changes in behaviour associated with addiction, recent work has pointed to the types of mechanism that could be involved.", "title": "" }, { "docid": "06e74a431b45aec75fb21066065e1353", "text": "Despite the prevalence of sleep complaints among psychiatric patients, few questionnaires have been specifically designed to measure sleep quality in clinical populations. The Pittsburgh Sleep Quality Index (PSQI) is a self-rated questionnaire which assesses sleep quality and disturbances over a 1-month time interval. Nineteen individual items generate seven \"component\" scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. The sum of scores for these seven components yields one global score. Clinical and clinimetric properties of the PSQI were assessed over an 18-month period with \"good\" sleepers (healthy subjects, n = 52) and \"poor\" sleepers (depressed patients, n = 54; sleep-disorder patients, n = 62). Acceptable measures of internal homogeneity, consistency (test-retest reliability), and validity were obtained. A global PSQI score greater than 5 yielded a diagnostic sensitivity of 89.6% and specificity of 86.5% (kappa = 0.75, p less than 0.001) in distinguishing good and poor sleepers. The clinimetric and clinical properties of the PSQI suggest its utility both in psychiatric clinical practice and research activities.", "title": "" }, { "docid": "ab32c8e5a5f8f7054d7a820514b1a84b", "text": "Descriptions and reviews for products abound on the web and characterise the corresponding products through their aspects. Extracting these aspects is essential to better understand these descriptions, e.g., for comparing or recommending products. Current pattern-based aspect extraction approaches focus on flat patterns extracting flat sets of adjective-noun pairs. 
Aspects also have crucial importance on sentiment classification in which sentiments are matched with aspect-level expressions. A preliminary step in both aspect extraction and aspect based sentiment analysis is to detect aspect terms and opinion targets. In this paper, we propose a sequential learning approach to extract aspect terms and opinion targets from opinionated documents. For the first time, we use semi-markov conditional random fields for this task and we incorporate word embeddings as features into the learning process. We get comparative results on the benchmark datasets for the subtask of aspect term extraction in SemEval-2014 Task 4 and the subtask of opinion target extraction in SemEval-2015 Task 12. Our results show that word embeddings improve the detection accuracy for aspect terms and opinion targets.", "title": "" }, { "docid": "f76088febc06463f01e98561d89d06cd", "text": "We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Different from previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene’s artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth using a non-linear global function to the depth range of an autostereoscopic display, and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.", "title": "" }, { "docid": "48931b870057884b8b1c679781e2adc9", "text": "Recommender systems have been researched extensively by the Technology Enhanced Learning (TEL) community during the last decade. By identifying suitable resources from a potentially overwhelming variety of choices, such systems offer a promising approach to facilitate both learning and teaching tasks. As learning is taking place in extremely diverse and rich environments, the incorporation of contextual information about the user in the recommendation process has attracted major interest. Such contextualization is researched as a paradigm for building intelligent systems that can better predict and anticipate the needs of users, and act more efficiently in response to their behavior. In this paper, we try to assess the degree to which current work in TEL recommender systems has achieved this, as well as outline areas in which further work is needed. First, we present a context framework that identifies relevant context dimensions for TEL applications. Then, we present an analysis of existing TEL recommender systems along these dimensions. Finally, based on our survey results, we outline topics on which further research is needed.", "title": "" }, { "docid": "ba5d0acb79bcd3fd1ffdb85ed345badc", "text": "Although the Transformer translation model (Vaswani et al., 2017) has achieved state-ofthe-art performance in a variety of translation tasks, how to use document-level context to deal with discourse phenomena problematic for Transformer still remains a challenge. 
In this work, we extend the Transformer model with a new context encoder to represent document-level context, which is then incorporated into the original encoder and decoder. As large-scale document-level parallel corpora are usually not available, we introduce a two-step training method to take full advantage of abundant sentence-level parallel corpora and limited document-level parallel corpora. Experiments on the NIST ChineseEnglish datasets and the IWSLT FrenchEnglish datasets show that our approach improves over Transformer significantly. 1", "title": "" }, { "docid": "7c5f1b12f540c8320587ead7ed863ee5", "text": "This paper studies the non-fragile mixed H∞ and passive synchronization problem for Markov jump neural networks. The randomly occurring controller gain fluctuation phenomenon is investigated for non-fragile strategy. Moreover, the mixed time-varying delays composed of discrete and distributed delays are considered. By employing stochastic stability theory, synchronization criteria are developed for the Markov jump neural networks. On the basis of the derived criteria, the non-fragile synchronization controller is designed. Finally, an illustrative example is presented to demonstrate the validity of the control approach.", "title": "" }, { "docid": "03ab3aeee4eb4505956a0c516cab26dd", "text": "The present study investigated the effect of 21 days of horizontal bed rest on cutaneous cold and warm sensitivity, and on behavioural temperature regulation. Healthy male subjects (N = 10) were accommodated in a hospital ward for the duration of the study and were under 24-h medical care. All activities (eating, drinking, hygiene, etc.) were conducted in the horizontal position. On the 1st and 22nd day of bed rest, cutaneous temperature sensitivity was tested by applying cold and warm stimuli of different magnitudes to the volar region of the forearm via a Peltier element thermode. Behavioural thermoregulation was assessed by having the subjects regulate the temperature of the water within a water-perfused suit (T wps) they were wearing. A control unit established a sinusoidal change in T wps, such that it varied from 27 to 42°C. The subjects could alter the direction of the change of T wps, when they perceived it as thermally uncomfortable. The magnitude of the oscillations towards the end of the trial was assumed to represent the upper and lower boundaries of the thermal comfort zone. The cutaneous threshold for detecting cold stimulus decreased (P < 0.05) from 1.6 (1.0)°C on day 1 to 1.0 (0.3)°C on day 22. No effect was observed on the ability to detect warm stimuli or on the regulated T wps. We conclude that although cold sensitivity increased after bed rest, it was not of sufficient magnitude to cause any alteration in behavioural thermoregulatory responses.", "title": "" }, { "docid": "db5ff75a7966ec6c1503764d7e510108", "text": "Qualitative content analysis as described in published literature shows conflicting opinions and unsolved issues regarding meaning and use of concepts, procedures and interpretation. This paper provides an overview of important concepts (manifest and latent content, unit of analysis, meaning unit, condensation, abstraction, content area, code, category and theme) related to qualitative content analysis; illustrates the use of concepts related to the research procedure; and proposes measures to achieve trustworthiness (credibility, dependability and transferability) throughout the steps of the research procedure. 
Interpretation in qualitative content analysis is discussed in light of Watzlawick et al.'s [Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies and Paradoxes. W.W. Norton & Company, New York, London] theory of communication.", "title": "" }, { "docid": "58f1ba92eb199f4d105bf262b30dbbc5", "text": "Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One of such approaches is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top–down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly consistently improves the recognition accuracy.", "title": "" }, { "docid": "c526e32c9c8b62877cb86bc5b097e2cf", "text": "This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.", "title": "" }, { "docid": "c2c994664e3aecff1ccb8d8feaf860e9", "text": "Hazard zones associated with LNG handling activities have been a major point of contention in recent terminal development applications. Debate has reflected primarily worst case scenarios and discussion of these. This paper presents results from a maximum credible event approach. A comparison of results from several models either run by the authors or reported in the literature is presented. 
While larger scale experimental trials will be necessary to reduce the uncertainty, in the interim a set of base cases are suggested covering both existing trials and credible and worst case events is proposed. This can assist users to assess the degree of conservatism present in quoted modeling approaches and model selections.", "title": "" }, { "docid": "efde92d1e86ff0b5f91b006521935621", "text": "Sizing equations for electrical machinery are developed from basic principles. The technique provides new insights into: 1. The effect of stator inner and outer diameters. 2. The amount of copper and steel used. 3. A maximizing function. 4. Equivalent slot dimensions in terms of diameters and flux density distribution. 5. Pole number effects. While the treatment is analytical, the scope is broad and intended to assist in the design of electrical machinery. Examples are given showing how the machine's internal geometry can assume extreme proportions through changes in basic variables.", "title": "" }, { "docid": "f945b645e492e2b5c6c2d2d4ea6c57ae", "text": "PURPOSE\nThe aim of this review was to look at relevant data and research on the evolution of ventral hernia repair.\n\n\nMETHODS\nResources including books, research, guidelines, and online articles were reviewed to provide a concise history of and data on the evolution of ventral hernia repair.\n\n\nRESULTS\nThe evolution of ventral hernia repair has a very long history, from the recognition of ventral hernias to its current management, with significant contributions from different authors. Advances in surgery have led to more cases of ventral hernia formation, and this has required the development of new techniques and new materials for ventral hernia management. The biocompatibility of prosthetic materials has been important in mesh development. The functional anatomy and physiology of the abdominal wall has become important in ventral hernia management. New techniques in abdominal wall closure may prevent or reduce the incidence of ventral hernia in the future.\n\n\nCONCLUSION\nThe management of ventral hernia is continuously evolving as it responds to new demands and new technology in surgery.", "title": "" }, { "docid": "5508603a802abb9ab0203412b396b7bc", "text": "We present an optimal algorithm for informative path planning (IPP), using a branch and bound method inspired by feature selection algorithms. The algorithm uses the monotonicity of the objective function to give an objective function-dependent speedup versus brute force search. We present results which suggest that when maximizing variance reduction in a Gaussian process model, the speedup is significant.", "title": "" }, { "docid": "5744e87741b6154b333e0f24bb17f0ea", "text": "We describe two new related resources that facilitate modelling of general knowledge reasoning in 4th grade science exams. The first is a collection of curated facts in the form of tables, and the second is a large set of crowd-sourced multiple-choice questions covering the facts in the tables. Through the setup of the crowd-sourced annotation task we obtain implicit alignment information between questions and tables. 
We envisage that the resources will be useful not only to researchers working on question answering, but also to people investigating a diverse range of other applications such as information extraction, question parsing, answer type identification, and lexical semantic modelling.", "title": "" }, { "docid": "0e5a11ef4daeb969702e40ea0c50d7f3", "text": "OBJECTIVES\nThe aim of this study was to assess the long-term safety and efficacy of the CYPHER (Cordis, Johnson and Johnson, Bridgewater, New Jersey) sirolimus-eluting coronary stent (SES) in percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nConcern over the safety of drug-eluting stents implanted during PCI for STEMI remains, and long-term follow-up from randomized trials are necessary. TYPHOON (Trial to assess the use of the cYPHer sirolimus-eluting stent in acute myocardial infarction treated with ballOON angioplasty) randomized 712 patients with STEMI treated by primary PCI to receive either SES (n = 355) or bare-metal stents (BMS) (n = 357). The primary end point, target vessel failure at 1 year, was significantly lower in the SES group than in the BMS group (7.3% vs. 14.3%, p = 0.004) with no increase in adverse events.\n\n\nMETHODS\nA 4-year follow-up was performed. Complete data were available in 501 patients (70%), and the survival status is known in 580 patients (81%).\n\n\nRESULTS\nFreedom from target lesion revascularization (TLR) at 4 years was significantly better in the SES group (92.4% vs. 85.1%; p = 0.002); there were no significant differences in freedom from cardiac death (97.6% and 95.9%; p = 0.37) or freedom from repeat myocardial infarction (94.8% and 95.6%; p = 0.85) between the SES and BMS groups. No difference in definite/probable stent thrombosis was noted at 4 years (SES: 4.4%, BMS: 4.8%, p = 0.83). In the 580 patients with known survival status at 4 years, the all-cause death rate was 5.8% in the SES and 7.0% in the BMS group (p = 0.61).\n\n\nCONCLUSIONS\nIn the 70% of patients with complete follow-up at 4 years, SES demonstrated sustained efficacy to reduce TLR with no difference in death, repeat myocardial infarction or stent thrombosis. (The Study to Assess AMI Treated With Balloon Angioplasty [TYPHOON]; NCT00232830).", "title": "" }, { "docid": "44c66a2654fdc7ab72dabaa8e31f0e99", "text": "The availability of new generation multispectral sensors of the Landsat 8 and Sentinel-2 satellite platforms offers unprecedented opportunities for long-term high-frequency monitoring applications. The present letter aims at highlighting some potentials and challenges deriving from the spectral and spatial characteristics of the two instruments. Some comparisons between corresponding bands and band combinations were performed on the basis of different datasets: the first consists of a set of simulated images derived from a hyperspectral Hyperion image, the other five consist instead of pairs of real images (Landsat 8 and Sentinel-2A) acquired on the same date, over five areas. Results point out that in most cases the two sensors can be well combined; however, some issues arise regarding near-infrared bands when Sentinel-2 data are combined with both Landsat 8 and older Landsat images.", "title": "" }, { "docid": "b229aa8b39b3df3fec941ce4791a2fe9", "text": "Translating information between text and image is a fundamental problem in artificial intelligence that connects natural language processing and computer vision. 
In the past few years, performance in image caption generation has seen significant improvement through the adoption of recurrent neural networks (RNN). Meanwhile, text-to-image generation begun to generate plausible images using datasets of specific categories like birds and flowers. We've even seen image generation from multi-category datasets such as the Microsoft Common Objects in Context (MSCOCO) through the use of generative adversarial networks (GANs). Synthesizing objects with a complex shape, however, is still challenging. For example, animals and humans have many degrees of freedom, which means that they can take on many complex shapes. We propose a new training method called Image-Text-Image (I2T2I) which integrates text-to-image and image-to-text (image captioning) synthesis to improve the performance of text-to-image synthesis. We demonstrate that I2T2I can generate better multi-categories images using MSCOCO than the state-of-the-art. We also demonstrate that I2T2I can achieve transfer learning by using a pre-trained image captioning module to generate human images on the MPII Human Pose dataset (MHP) without using sentence annotation.", "title": "" } ]
scidocsrr
ba3b0555c1640c32c281df578e28e0ed
Comparative study for various DNA based steganography techniques with the essential conclusions about the future research
[ { "docid": "66b909528a566662667a3d8c7c749bf4", "text": "There exists a big demand for innovative secure electronic communications while the expertise level of attackers increases rapidly and that causes even bigger demands and needs for an extreme secure connection. An ideal security protocol should always be protecting the security of connections in many aspects, and leaves no trapdoor for the attackers. Nowadays, one of the popular cryptography protocols is hybrid cryptosystem that uses private and public key cryptography to change secret message. In available cryptography protocol attackers are always aware of transmission of sensitive data. Even non-interested attackers can get interested to break the ciphertext out of curiosity and challenge, when suddenly catches some scrambled data over the network. First of all, we try to explain the roles of innovative approaches in cryptography. After that we discuss about the disadvantages of public key cryptography to exchange secret key. Furthermore, DNA steganography is explained as an innovative paradigm to diminish the usage of public cryptography to exchange session key. In this protocol, session key between a sender and receiver is hidden by novel DNA data hiding technique. Consequently, the attackers are not aware of transmission of session key through unsecure channel. Finally, the strength point of the DNA steganography is discussed.", "title": "" } ]
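The passage above hides a session key inside DNA sequences instead of exchanging it with public-key cryptography. As a toy illustration of the first step shared by most DNA data-hiding schemes, the sketch below maps a message to nucleotides with a 2-bit-per-base table; the encoding table and the round-trip routine are illustrative assumptions, not the specific hiding technique evaluated in the comparative study.

```python
# Toy binary-to-nucleotide mapping often used as the first step of DNA data hiding.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode_message(message: str) -> str:
    """Encode a text message as a nucleotide string (2 bits per base)."""
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode_message(dna: str) -> str:
    """Recover the text message from a nucleotide string."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

if __name__ == "__main__":
    payload = encode_message("key")          # 12-nucleotide string for the 3-byte message
    assert decode_message(payload) == "key"  # round trip recovers the hidden key
    print(payload)
```

A real scheme would additionally embed this payload into a carrier sequence (by substitution, insertion, or complementary-pair rules) so that the hidden key is not directly visible.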
[ { "docid": "f9eed4f99d70c51dc626a61724540d3c", "text": "A soft-start circuit with soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with minimum selector implemented with a three-limb differential pair skillfully. The soft-recovery strategy is based on a compact clamp circuit. The ramp voltage would be clamped once the feedback voltage is detected lower than a threshold, which could control the output to be recovered slowly and linearly. A monolithic DC-DC buck converter with proposed circuit has been fabricated with a 0.5μm CMOS process for validation. The measurement result shows that the ramp-based soft-start and soft-recovery circuit have good performance and agree well with the theoretical analysis.", "title": "" }, { "docid": "7e683f15580e77b1e207731bb73b8107", "text": "The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. However, the use of a discrete image presents a lot of problems that may in9uence the extraction of the skeleton. Moreover, most of the methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, e;cient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine whether a given pixel is a skeleton point independently. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton with a single pixel width without requiring a linking algorithm or iteration process. Experiments show that the runtime of our algorithm is faster than the distance transformation and is linearly proportional to the number of pixels of an image. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "35dbef4cc4b8588d451008b8156f326f", "text": "Raman spectroscopy is a powerful tool for studying the biochemical composition of tissues and cells in the human body. We describe the initial results of a feasibility study to design and build a miniature, fiber optic probe incorporated into a standard hypodermic needle. This probe is intended for use in optical biopsies of solid tissues to provide valuable information of disease type, such as in the lymphatic system, breast, or prostate, or of such tissue types as muscle, fat, or spinal, when identifying a critical injection site. The optical design and fabrication of this probe is described, and example spectra of various ex vivo samples are shown.", "title": "" }, { "docid": "5374ed153eb37e5680f1500fea5b9dbe", "text": "Social media have become dominant in everyday life during the last few years where users share their thoughts and experiences about their enjoyable events in posts. Most of these posts are related to different categories related to: activities, such as dancing, landscapes, such as beach, people, such as a selfie, and animals such as pets. 
While some of these posts become popular and get more attention, others are completely ignored. In order to address the desire of users to create popular posts, several researches have studied post popularity prediction. Existing works focus on predicting the popularity without considering the category type of the post. In this paper we propose category specific post popularity prediction using visual and textual content for action, scene, people and animal categories. In this way we aim to answer the question What makes a post belonging to a specific action, scene, people or animal category popular? To answer to this question we perform several experiments on a collection of 65K posts crawled from Instagram.", "title": "" }, { "docid": "638cc32b94c4e44a1e185fdbdc6646f5", "text": "Object detection and recognition is an important task in many computer vision applications. In this paper an Android application was developed using Eclipse IDE and OpenCV3 Library. This application is able to detect objects in an image that is loaded from the mobile gallery, based on its color, shape, or local features. The image is processed in the HSV color domain for better color detection. Circular shapes are detected using Circular Hough Transform and other shapes are detected using Douglas-Peucker algorithm. BRISK (binary robust invariant scalable keypoints) local features were applied in the developed Android application for matching an object image in another scene image. The steps of the proposed detection algorithms are described, and the interfaces of the application are illustrated. The application is ported and tested on Galaxy S3, S6, and Note1 Smartphones. Based on the experimental results, the application is capable of detecting eleven different colors, detecting two dimensional geometrical shapes including circles, rectangles, triangles, and squares, and correctly match local features of object and scene images for different conditions. The application could be used as a standalone application, or as a part of another application such as Robot systems, traffic systems, e-learning applications, information retrieval and many others.", "title": "" }, { "docid": "a180735616ded05900cda77be19fc787", "text": "Economically sustainable software systems must be able to cost-effectively evolve in response to changes in their environment, their usage profile, and business demands. However, in many software development projects, sustainability is treated as an afterthought, as developers are driven by time-to-market pressure and are often not educated to apply sustainability-improving techniques. While software engineering research and practice has suggested a large amount of such techniques, a holistic overview is missing and the effectiveness of individual techniques is often not sufficiently validated. On this behalf we created a catalog of “software sustainability guidelines” to support project managers, software architects, and developers during system design, development, operation, and maintenance. This paper describes how we derived these guidelines and how we applied selected techniques from them in two industrial case studies. We report several lessons learned about sustainable software development.", "title": "" }, { "docid": "f175bfcd43f1c11c6b538022e2db1281", "text": "The D-AMP methodology, recently proposed by Metzler, Maleki, and Baraniuk, allows one to plug in sophisticated denoisers like BM3D into the AMP algorithm to achieve state-of-the-art compressive image recovery. 
But AMP diverges with small deviations from the i.i.d.-Gaussian assumption on the measurement matrix. Recently, the VAMP algorithm has been proposed to fix this problem. In this work, we show that the benefits of VAMP extend to D-VAMP. Consider the problem of recovering a (vectorized) image x0 ∈ R^N from compressive (i.e., M ≪ N) noisy linear measurements y = Φ x0 + w ∈ R^M, (1) known as “compressive imaging.” The “sparse” approach to this problem exploits sparsity in the coefficients v0 ≜ Ψ x0 ∈ R^N of an orthonormal wavelet transform Ψ. The idea is to rewrite (1) as y = A v0 + w for A ≜ ΦΨ, (2) recover an estimate v̂ of v0 from y, and then construct the image estimate as x̂ = Ψ v̂. Although many algorithms have been proposed for sparse recovery of v0, a notable one is the approximate message passing (AMP) algorithm from [1]. It is computationally efficient (i.e., one multiplication by A and A^T per iteration and relatively few iterations) and its performance, when M and N are large and Φ is zero-mean i.i.d. Gaussian, is rigorously characterized by a scalar state evolution. A variant called “denoising-based AMP” (D-AMP) was recently proposed [2] for direct recovery of x0 from (1). It exploits the fact that, at iteration t, AMP constructs a pseudo-measurement of the form v0 + N(0, σ_t I) with known σ_t, which is amenable to any image denoising algorithm. By plugging in a state-of-the-art image denoiser like BM3D [3], D-AMP yields state-of-the-art compressive imaging. AMP and D-AMP, however, have a serious weakness: they diverge under small deviations from the zero-mean i.i.d. Gaussian assumption on Φ, such as non-zero mean or mild ill-conditioning. A robust alternative called “vector AMP” (VAMP) was recently proposed [4]. VAMP has similar complexity to AMP and a rigorous state evolution.", "title": "" }, { "docid": "29ce9730d55b55b84e195983a8506e5c", "text": "In situ Raman spectroscopy is an extremely valuable technique for investigating fundamental reactions that occur inside lithium rechargeable batteries. However, specialized in situ Raman spectroelectrochemical cells must be constructed to perform these experiments. These cells are often quite different from the cells used in normal electrochemical investigations. More importantly, the number of cells is usually limited by construction costs; thus, routine usage of in situ Raman spectroscopy is hampered for most laboratories. This paper describes a modification to industrially available coin cells that facilitates routine in situ Raman spectroelectrochemical measurements of lithium batteries. To test this strategy, in situ Raman spectroelectrochemical measurements are performed on Li//V2O5 cells. Various phases of Li(x)V2O5 could be identified in the modified coin cells with Raman spectroscopy, and the electrochemical cycling performance between in situ and unmodified cells is nearly identical.", "title": "" }, { "docid": "501760c68ed75ed288749e9b4068234f", "text": "This research investigated impulse buying as resulting from the depletion of a common—but limited—resource that governs self-control. In three investigations, participants’ self-regulatory resources were depleted or not; later, impulsive spending responses were measured. Participants whose resources were depleted, relative to participants whose resources were not depleted, felt stronger urges to buy, were willing to spend more, and actually did spend more money in unanticipated buying situations.
Participants having depleted resources reported being influenced equally by affective and cognitive factors and purchased products that were high on each factor at equal rates. Hence, self-regulatory resource availability predicts whether people can resist impulse buying temptations.", "title": "" }, { "docid": "68b2608c91525f3147f74b41612a9064", "text": "Protective effects of sweet orange (Citrus sinensis) peel and their bioactive compounds on oxidative stress were investigated. According to HPLC-DAD and HPLC-MS/MS analysis, hesperidin (HD), hesperetin (HT), nobiletin (NT), and tangeretin (TT) were present in water extracts of sweet orange peel (WESP). The cytotoxic effect in 0.2mM t-BHP-induced HepG2 cells was inhibited by WESP and their bioactive compounds. The protective effect of WESP and their bioactive compounds in 0.2mM t-BHP-induced HepG2 cells may be associated with positive regulation of GSH levels and antioxidant enzymes, decrease in ROS formation and TBARS generation, increase in the mitochondria membrane potential and Bcl-2/Bax ratio, as well as decrease in caspase-3 activation. Overall, WESP displayed a significant cytoprotective effect against oxidative stress, which may be most likely because of the phenolics-related bioactive compounds in WESP, leading to maintenance of the normal redox status of cells.", "title": "" }, { "docid": "f8821f651731943ce1652bc8a1d2c0d6", "text": "business units and thus not even practiced in a cohesive, coherent manner. In the worst cases, busy business unit executives trade roving bands of developers like Pokémon cards in a fifth-grade classroom (in an attempt to get ahead). Suffice it to say, none of this is good. The disconnect between security and development has ultimately produced software development efforts that lack any sort of contemporary understanding of technical security risks. Today's complex and highly connected computing environments trigger myriad security concerns, so by blowing off the idea of security entirely, software builders virtually guarantee that their creations will have way too many security weaknesses that could—and should—have been avoided. This article presents some recommendations for solving this problem. Our approach is born out of experience in two diverse fields: software security and information security. Central among our recommendations is the notion of using the knowledge inherent in information security organizations to enhance secure software development efforts. Don't stand so close to me Best practices in software security include a manageable number of simple activities that should be applied throughout any software development process (see Figure 1). These lightweight activities should start at the earliest stages of software development and then continue throughout the development process and into deployment and operations. Although an increasing number of software shops and individual developers are adopting the software security touchpoints we describe here as their own, they often lack the requisite security domain knowledge required to do so. This critical knowledge arises from years of observing system intrusions, dealing with malicious hackers, suffering the consequences of software vulnera-bilities, and so on. Put in this position , even the best-intended development efforts can fail to take into account real-world attacks previously observed on similar application architectures. 
Although recent books 1,2 are starting to turn this knowledge gap around, the science of attack is a novel one. Information security staff—in particular, incident handlers and vulnerability/patch specialists— have spent years responding to attacks against real systems and thinking about the vulnerabilities that spawned them. In many cases, they've studied software vulnerabili-ties and their resulting attack profiles in minute detail. However, few information security professionals are software developers (at least, on a full-time basis), and their solution sets tend to be limited to reactive techniques such as installing software patches, shoring up firewalls, updating intrusion detection signature databases, and the like. It's very rare to find information security …", "title": "" }, { "docid": "d4d0818e22b736f04acc53cdfcebb2f8", "text": "Forest and rural fires are one of the main causes of environmental degradation in Mediterranean countries. Existing fire detection systems only focus on detection, but not on the verification of the fire. However, almost all of them are just simulations, and very few implementations can be found. Besides, the systems in the literature lack scalability. In this paper we show all the steps followed to perform the design, research and development of a wireless multisensor network which mixes sensors with IP cameras in a wireless network in order to detect and verify fire in rural and forest areas of Spain. We have studied how many cameras, sensors and access points are needed to cover a rural or forest area, and the scalability of the system. We have developed a multisensor and when it detects a fire, it sends a sensor alarm through the wireless network to a central server. The central server selects the closest wireless cameras to the multisensor, based on a software application, which are rotated to the sensor that raised the alarm, and sends them a message in order to receive real-time images from the zone. The camera lets the fire fighters corroborate the existence of a fire and avoid false alarms. In this paper, we show the test performance given by a test bench formed by four wireless IP cameras in several situations and the energy consumed when they are transmitting. Moreover, we study the energy consumed by each device when the system is set up. The wireless sensor network could be connected to Internet through a gateway and the images of the cameras could be seen from any part of the world.", "title": "" }, { "docid": "7aca3e7f9409fa1381a309d304eb898d", "text": "The Internet of things (IoT) is composed of billions of sensing devices that are subject to threats stemming from increasing reliance on communications technologies. A Trust-Based Secure Routing (TBSR) scheme using the traceback approach is proposed to improve the security of data routing and maximize the use of available energy in Energy-Harvesting Wireless Sensor Networks (EHWSNs). The main contributions of a TBSR are (a) the source nodes send data and notification to sinks through disjoint paths, separately; in such a mechanism, the data and notification can be verified independently to ensure their security. (b) Furthermore, the data and notification adopt a dynamic probability of marking and logging approach during the routing. Therefore, when attacked, the network will adopt the traceback approach to locate and clear malicious nodes to ensure security. 
The probability of marking is determined based on the level of battery remaining; when nodes harvest more energy, the probability of marking is higher, which can improve network security. Because if the probability of marking is higher, the number of marked nodes on the data packet routing path will be more, and the sink will be more likely to trace back the data packet routing path and find malicious nodes according to this notification. When data packets are routed again, they tend to bypass these malicious nodes, which make the success rate of routing higher and lead to improved network security. When the battery level is low, the probability of marking will be decreased, which is able to save energy. For logging, when the battery level is high, the network adopts a larger probability of marking and smaller probability of logging to transmit notification to the sink, which can reserve enough storage space to meet the storage demand for the period of the battery on low level; when the battery level is low, increasing the probability of logging can reduce energy consumption. After the level of battery remaining is high enough, nodes then send the notification which was logged before to the sink. Compared with past solutions, our results indicate that the performance of the TBSR scheme has been improved comprehensively; it can effectively increase the quantity of notification received by the sink by 20%, increase energy efficiency by 11%, reduce the maximum storage capacity needed by nodes by 33.3% and improve the success rate of routing by approximately 16.30%.", "title": "" }, { "docid": "1db6ea040880ceeb57737a5054206127", "text": "Several studies regarding security testing for corporate environments, networks, and systems were developed in the past years. Therefore, to understand how methodologies and tools for security testing have evolved is an important task. One of the reasons for this evolution is due to penetration test, also known as Pentest. The main objective of this work is to provide an overview on Pentest, showing its application scenarios, models, methodologies, and tools from published papers. Thereby, this work may help researchers and people that work with security to understand the aspects and existing solutions related to Pentest. A systematic mapping study was conducted, with an initial gathering of 1145 papers, represented by 1090 distinct papers that have been evaluated. At the end, 54 primary studies were selected to be analyzed in a quantitative and qualitative way. As a result, we classified the tools and models that are used on Pentest. We also show the main scenarios in which these tools and methodologies are applied to. Finally, we present some open issues and research opportunities on Pentest.", "title": "" }, { "docid": "8a679c93185332398c5261ddcfe81e84", "text": "We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain, using a function approximator involving linear combinations of fixed basis functions. The algorithm we analyze performs on-line updating of a parameter vector during a single endless trajectory of an ergodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability 1), a characterization of the limit of convergence, and a bound on the resulting approximation error. 
In addition to proving new and stronger results than those previously available, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. Finally, we prove that on-line updates, based on entire trajectories of the Markov chain, are in a certain sense necessary for convergence. This fact reconciles positive and negative results that have been discussed in the literature, regarding the soundness of temporal-difference learning.", "title": "" }, { "docid": "34690f455f9e539b06006f30dd3e512b", "text": "Disaster relief operations rely on the rapid deployment of wireless network architectures to provide emergency communications. Future emergency networks will consist typically of terrestrial, portable base stations and base stations on-board low altitude platforms (LAPs). The effectiveness of network deployment will depend on strategically chosen station positions. In this paper a method is presented for calculating the optimal proportion of the two station types and their optimal placement. Random scenarios and a real example from Hurricane Katrina are used for evaluation. The results confirm the strength of LAPs in terms of high bandwidth utilisation, achieved by their ability to cover wide areas, their portability and adaptability to height. When LAPs are utilized, the total required number of base stations to cover a desired area is generally lower. For large scale disasters in particular, this leads to shorter response times and the requirement of fewer resources. This goal can be achieved more easily if algorithms such as the one presented in this paper are used.", "title": "" }, { "docid": "7313ab8f065b8cc167aa2d4cd999eae3", "text": "LossCalcTM version 2.0 is the Moody's KMV model to predict loss given default (LGD) or (1 recovery rate). Lenders and investors use LGD to estimate future credit losses. LossCalc is a robust and validated model of LGD for loans, bonds, and preferred stocks for the US, Canada, the UK, Continental Europe, Asia, and Latin America. It projects LGD for defaults occurring immediately and for defaults that may occur in one year. LossCalc is a statistical model that incorporates information at different levels: collateral, instrument, firm, industry, country, and the macroeconomy to predict LGD. It significantly improves on the use of historical recovery averages to predict LGD, helping institutions to better price and manage credit risk. LossCalc is built on a global dataset of 3,026 recovery observations for loans, bonds, and preferred stock from 1981-2004. This dataset includes over 1,424 defaults of both public and private firms—both rated and unrated instruments—in all industries. LossCalc will help institutions better manage their credit risk and can play a critical role in meeting the Basel II requirements on advanced Internal Ratings Based Approach. This paper describes Moody's KMV LossCalc, its predictive factors, the modeling approach, and its out of-time and out of-sample model validation. AUTHORS Greg M. Gupton Roger M. Stein", "title": "" }, { "docid": "12bdec4e6f70a7fe2bd4c750752287c3", "text": "Rapid growth in the Internet of Things (IoT) has resulted in a massive growth of data generated by these devices and sensors put on the Internet. Physical-cyber-social (PCS) big data consist of this IoT data, complemented by relevant Web-based and social data of various modalities. 
Smart data is about exploiting this PCS big data to get deep insights and make it actionable, and making it possible to facilitate building intelligent systems and applications. This article discusses key AI research in semantic computing, cognitive computing, and perceptual computing. Their synergistic use is expected to power future progress in building intelligent systems and applications for rapidly expanding markets in multiple industries. Over the next two years, this column on IoT will explore many challenges and technologies on intelligent use and applications of IoT data.", "title": "" }, { "docid": "f530b8b9fc2565687ccc28ba6a3a72ca", "text": "Design of an electric machine such as the axial flux permanent magnet synchronous motor (AFPMSM) requires a 3-D finite-element method (FEM) analysis. The AFPMSM with a 3-D FEM model involves too much time and effort to analyze. To deal with this problem, we apply a surrogate assisted multi-objective optimization (SAMOO) algorithm that can realize an accurate and well-distributed Pareto front set with a few number of function calls, and considers various design variables in the motor design process. The superior performance of the SAMOO is verified by comparing it with conventional multi-objective optimization algorithms in a test function. Finally, the optimal design result of the AFPMSM for the electric bicycle is obtained by using the SAMOO algorithm.", "title": "" }, { "docid": "fedcb2bd51b9fd147681ae23e03c7336", "text": "Epidemiological studies have revealed the important role that foodstuffs of vegetable origin have to play in the prevention of numerous illnesses. The natural antioxidants present in such foodstuffs, among which the flavonoids are widely present, may be responsible for such an activity. Flavonoids are compounds that are low in molecular weight and widely distributed throughout the vegetable kingdom. They may be of great utility in states of acute or chronic diarrhoea through the inhibition of intestinal secretion and motility, and may also be beneficial in the reduction of chronic inflammatory damage in the intestine, by affording protection against oxidative stress and by preserving mucosal function. For this reason, the use of these agents is recommended in the treatment of inflammatory bowel disease, in which various factors are involved in extreme immunological reactions, which lead to chronic intestinal inflammation.", "title": "" } ]
scidocsrr
a6b7d71628f57d0a64e68969f9afca56
Benchmarking Graph Databases on the Problem of Community Detection
[ { "docid": "164b61b3c8e29e19cd6c7be2abf046db", "text": "In recent years, more and more companies provide services that can not be anymore achieved efficiently using relational databases. As such, these companies are forced to use alternative database models such as XML databases, object-oriented databases, document-oriented databases and, more recently graph databases. Graph databases only exist for a few years. Although there have been some comparison attempts, they are mostly focused on certain aspects only. In this paper, we present a distributed graph database comparison framework and the results we obtained by comparing four important players in the graph databases market: Neo4j, Orient DB, Titan and DEX.", "title": "" }, { "docid": "a69220d5cf0145eb6e2e8b13252e6eea", "text": "Database benchmarks are an important tool for database researchers and practitioners that ease the process of making informed comparisons between different database hardware, software and configurations. Large scale web services such as social networks are a major and growing database application area, but currently there are few benchmarks that accurately model web service workloads.\n In this paper we present a new synthetic benchmark called LinkBench. LinkBench is based on traces from production databases that store \"social graph\" data at Facebook, a major social network. We characterize the data and query workload in many dimensions, and use the insights gained to construct a realistic synthetic benchmark. LinkBench provides a realistic and challenging test for persistent storage of social and web service data, filling a gap in the available tools for researchers, developers and administrators.", "title": "" } ]
[ { "docid": "7bd54a65ce90f0d935857ba0fcb457a5", "text": "Estimating energy costs for an industrial process can be computationally intensive and time consuming, especially as it can involve data collection from different (distributed) monitoring sensors. Industrial processes have an implicit complexity involving the use of multiple appliances (devices/ sub-systems) attached to operation schedules, electrical capacity and optimisation setpoints which need to be determined for achieving operational cost objectives. Addressing the complexity associated with an industrial workflow (i.e. range and type of tasks) leads to increased requirements on the computing infrastructure. Such requirements can include achieving execution performance targets per processing unit within a particular size of infrastructure i.e. processing & data storage nodes to complete a computational analysis task within a specific deadline. The use of ensemblebased edge processing is identifed to meet these Quality of Service targets, whereby edge nodes can be used to distribute the computational load across a distributed infrastructure. Rather than relying on a single edge node, we propose the combined use of an ensemble of such nodes to overcome processing, data privacy/ security and reliability constraints. We propose an ensemble-based network processing model to facilitate distributed execution of energy simulations tasks within an industrial process. A scenario based on energy profiling within a fisheries plant is used to illustrate the use of an edge ensemble. The suggested approach is however general in scope and can be used in other similar application domains.", "title": "" }, { "docid": "e409a2a23fb0dbeb0aa57c89a10d61b1", "text": "Text is still the most prevalent Internet media type. Examples of this include popular social networking applications such as Twitter, Craigslist, Facebook, etc. Other web applications such as e-mail, blog, chat rooms, etc. are also mostly text based. A question we address in this paper that deals with text based Internet forensics is the following: given a short text document, can we identify if the author is a man or a woman? This question is motivated by recent events where people faked their gender on the Internet. Note that this is different from the authorship attribution problem. In this paper we investigate author gender identification for short length, multi-genre, content-free text, such as the ones found in many Internet applications. Fundamental questions we ask are: do men and women inherently use different classes of language styles? If this is true, what are good linguistic features that indicate gender? Based on research in human psychology, we propose 545 psycho-linguistic and gender-preferential cues along with stylometric features to build the feature space for this identification problem. Note that identifying the correct set of features that indicate gender is an open research problem. Three machine learning algorithms (support vector machine, Bayesian logistic regression and AdaBoost decision tree) are then designed for gender identification based on the proposed features. Extensive experiments on large text corpora (Reuters Corpus Volume 1 newsgroup data and Enron e-mail data) indicate an accuracy up to 85.1% in identifying the gender. Experiments also indicate that function words, word-based features and structural features are significant gender discriminators. a 2011 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "80a9489262ee8d94d64dd8e475c060a3", "text": "The effects of social-cognitive variables on preventive nutrition and behavioral intentions were studied in 580 adults at 2 points in time. The authors hypothesized that optimistic self-beliefs operate in 2 phases and made a distinction between action self-efficacy (preintention) and coping self-efficacy (postintention). Risk perceptions, outcome expectancies, and action self-efficacy were specified as predictors of the intention at Wave 1. Behavioral intention and coping self-efficacy served as mediators linking the 3 predictors with low-fat and high-fiber dietary intake 6 months later at Wave 2. Covariance structure analysis yielded a good model fit for the total sample and 6 subsamples created by a median split of 3 moderators: gender, age, and body weight. Parameter estimates differed between samples; the importance of perceived self-efficacy increased with age and weight.", "title": "" }, { "docid": "68a1c87e9931bd2a0f9424de451ebfac", "text": "Recent research in sound simulation has focused on either sound synthesis or sound propagation, and many standalone algorithms have been developed for each domain. We present a novel technique for coupling sound synthesis with sound propagation to automatically generate realistic aural content for virtual environments. Our approach can generate sounds from rigid-bodies based on the vibration modes and radiation coefficients represented by the single-point multipole expansion. We present a mode-adaptive propagation algorithm that uses a perceptual Hankel function approximation technique to achieve interactive runtime performance. The overall approach allows for high degrees of dynamism - it can support dynamic sources, dynamic listeners, and dynamic directivity simultaneously. We have integrated our system with the Unity game engine and demonstrate the effectiveness of this fully-automatic technique for audio content creation in complex indoor and outdoor scenes. We conducted a preliminary, online user-study to evaluate whether our Hankel function approximation causes any perceptible loss of audio quality. The results indicate that the subjects were unable to distinguish between the audio rendered using the approximate function and audio rendered using the full Hankel function in the Cathedral, Tuscany, and the Game benchmarks.", "title": "" }, { "docid": "ecad37ad1097369fd03f0decff2d23dc", "text": "The unique musculoskeletal structure of the human hand brings in wider dexterous capabilities to grasp and manipulate a repertoire of objects than the non-human primates. It has been widely accepted that the orientation and the position of the thumb plays an important role in this characteristic behavior. There have been numerous attempts to develop anthropomorphic robotic hands with varying levels of success. Nevertheless, manipulation ability in those hands is to be ameliorated even though they can grasp objects successfully. An appropriate model of the thumb is important to manipulate the objects against the fingers and to maintain the stability. Modeling these complex interactions about the mechanical axes of the joints and how to incorporate these joints in robotic thumbs is a challenging task. 
This article presents a review of the biomechanics of the human thumb and the robotic thumb designs to identify opportunities for future anthropomorphic robotic hands.", "title": "" }, { "docid": "733885d6ec4ac2f7bce950fb7104773f", "text": "This paper presents a neuro-fuzzy classifer for activity recognition using one triaxial accelerometer and feature reduction approaches. We use a triaxial accelerometer to acquire subjects’ acceleration data and train the neurofuzzy classifier to distinguish different activities/movements. To construct the neuro-fuzzy classifier, a modified mapping-constrained agglomerative clustering algorithm is devised to reveal a compact data configuration from the acceleration data. In addition, we investigate two different feature reduction methods, a feature subset selection and linear discriminate analysis. These two methods are used to determine the significant feature subsets and retain the characteristics of the data distribution in the feature space for training the neuro-fuzzy classifier. Experimental results have successfully validated the effectiveness of the proposed classifier.", "title": "" }, { "docid": "3122b61a0d48888dff488cc41564c820", "text": "In this study, the ensemble classifier presented by Caruana, Niculescu-Mizil, Crew & Ksikes (2004) is investigated. Their ensemble approach generates thousands of models using a variety of machine learning algorithms and uses a forward stepwise selection to build robust ensembles that can be optimised to an arbitrary metric. On average, the resulting ensemble out-performs the best individual machine learning models. The classifier is implemented in the WEKA machine learning environment, which allows the results presented by the original paper to be validated and the classifier to be extended to multi-class problem domains. The behaviour of different ensemble building strategies is also investigated. The classifier is then applied to the spam filtering domain, where it is tested on three different corpora in an attempt to provide a realistic evaluation of the system. It records similar performance levels to that seen in other problem domains and out-performs individual models and the naive Bayesian filtering technique regularly used by commercial spam filtering solutions. Caruana et al.’s (2004) classifier will typically outperform the best known models in a variety of problems.", "title": "" }, { "docid": "efb81d85abcf62f4f3747a58154c5144", "text": "Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion. 
Our code is available at https://github.com/sergeytulyakov/mocogan.", "title": "" }, { "docid": "12a3e52c3af78663698e7b907f6ee912", "text": "A novel graph-based language-independent stemming algorithm suitable for information retrieval is proposed in this article. The main features of the algorithm are retrieval effectiveness, generality, and computational efficiency. We test our approach on seven languages (using collections from the TREC, CLEF, and FIRE evaluation platforms) of varying morphological complexity. Significant performance improvement over plain word-based retrieval, three other language-independent morphological normalizers, as well as rule-based stemmers is demonstrated.", "title": "" }, { "docid": "e82cd7c22668b0c9ed62b4afdf49d1f4", "text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.", "title": "" }, { "docid": "4a37742db1c55b877733f53ea95ee3c6", "text": "This paper presents an overview of an intelligence platform we have built to address threat hunting and incident investigation use-cases in the cyber security domain. Specifically, we focus on User and Entity Behavior Analytics (UEBA) modules that track and monitor behaviors of users, IP addresses and devices in an enterprise. Anomalous behavior is automatically detected using machine learning algorithms based on Singular Values Decomposition (SVD). Such anomalous behavior indicative of potentially malicious activity is alerted to analysts with relevant contextual information for further investigation and action. We provide a detailed description of the models, algorithms and implementation underlying the module and demonstrate the functionality with empirical examples.", "title": "" }, { "docid": "1bbb8acdc8b5573647708da7ff0252b6", "text": "I have a ton of questions about layout, design how formal to be in my writing, and Nicholas J. Higham. Handbook of Writing for the Mathematical Sciences. Nick J Higham School of Mathematics and Manchester Institute for Mathematical of numerical algorithms Handbook of writing for the mathematical sciences. (1) Nicholas J. Higham. Handbook of writing for the mathematical sciences. SIAM, 1998. (2) Leslie Lamport. LATEX Users Guide & Reference Manual.", "title": "" }, { "docid": "58fda5b08ffe26440b173f363ca36292", "text": "The dependence on information technology became critical and IT infrastructure, critical data, intangible intellectual property are vulnerable to threats and attacks. Organizations install Intrusion Detection Systems (IDS) to alert suspicious traffic or activity. IDS generate a large number of alerts and most of them are false positive as the behavior construe for partial attack pattern or lack of environment knowledge. Monitoring and identifying risky alerts is a major concern to security administrator. 
The present work is to design an operational model for minimization of false positive alarms, including recurring alarms by security administrator. The architecture, design and performance of model in minimization of false positives in IDS are explored and the experimental results are presented with reference to lab environment.", "title": "" }, { "docid": "45f8c4e3409f8b27221e45e6c3485641", "text": "In recent years, time information is more and more important in collaborative filtering (CF) based recommender system because many systems have collected rating data for a long time, and time effects in user preference is stronger. In this paper, we focus on modeling time effects in CF and analyze how temporal features influence CF. There are four main types of time effects in CF: (1) time bias, the interest of whole society changes with time; (2) user bias shifting, a user may change his/her rating habit over time; (3) item bias shifting, the popularity of items changes with time; (4) user preference shifting, a user may change his/her attitude to some types of items. In this work, these four time effects are used by factorized model, which is called TimeSVD. Moreover, many other time effects are used by simple methods. Our time-dependent models are tested on Netflix data from Nov. 1999 to Dec. 2005. Experimental results show that prediction accuracy in CF can be improved significantly by using time information.", "title": "" }, { "docid": "911545273424b27832310d9869ccb55f", "text": "Current people detectors operate either by scanning an image in a sliding window fashion or by classifying a discrete set of proposals. We propose a model that is based on decoding an image into a set of people detections. Our system takes an image as input and directly outputs a set of distinct detection hypotheses. Because we generate predictions jointly, common post-processing steps such as nonmaximum suppression are unnecessary. We use a recurrent LSTM layer for sequence generation and train our model end-to-end with a new loss function that operates on sets of detections. We demonstrate the effectiveness of our approach on the challenging task of detecting people in crowded scenes1.", "title": "" }, { "docid": "c166a5ac33c4bf0ffe055578f016e72f", "text": "The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. 
Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).", "title": "" }, { "docid": "7e873e837ccc1696eb78639e03d02cae", "text": "Steering is an integral component of adaptive locomotor behavior. Along with reorientation of gaze and body in the direction of intended travel, body center of mass must be controlled in the mediolateral plane. In this study we examine how these subtasks are sequenced when steering is planned early or initiated under time constraints. Whole body kinematics were monitored as individuals were required to change their direction of travel by varying amounts when visually cued either at the beginning of the walk or one stride before. The analyses focused on the transition stride from one travel direction to another. Timing of changes (with respect to first right foot contact) in trunk roll angle, head and trunk yaw angle, and right foot displacement in the mediolateral plane were analyzed. The magnitude of these measures along with right and left foot placement at the beginning and right foot placement at the end of the transition stride were also analyzed. The results show the CNS uses two mechanisms, foot placement and trunk roll motion (piking action about the hip joint in the frontal plane), to move the center of mass towards the new direction of travel in the transition stride, preferring to use the first option when planning can be done early. Control of body center of mass precedes all other changes and is followed by initiation of head reorientation. Only then is the rest of the body reorientation initiated.", "title": "" }, { "docid": "b3da0c6745883ae3da10e341abc3bf4d", "text": "Electrophysiological recording studies in the dorsocaudal region of medial entorhinal cortex (dMEC) of the rat reveal cells whose spatial firing fields show a remarkably regular hexagonal grid pattern (Fyhn et al., 2004; Hafting et al., 2005). We describe a symmetric, locally connected neural network, or spin glass model, that spontaneously produces a hexagonal grid of activity bumps on a two-dimensional sheet of units. The spatial firing fields of the simulated cells closely resemble those of dMEC cells. A collection of grids with different scales and/or orientations forms a basis set for encoding position. Simulations show that the animal's location can easily be determined from the population activity pattern. Introducing an asymmetry in the model allows the activity bumps to be shifted in any direction, at a rate proportional to velocity, to achieve path integration. Furthermore, information about the structure of the environment can be superimposed on the spatial position signal by modulation of the bump activity levels without significantly interfering with the hexagonal periodicity of firing fields. Our results support the conjecture of Hafting et al. (2005) that an attractor network in dMEC may be the source of path integration information afferent to hippocampus.", "title": "" }, { "docid": "16b8a948e76a04b1703646d5e6111afe", "text": "Nanotechnology offers many potential benefits to cancer research through passive and active targeting, increased solubility/bioavailablility, and novel therapies. However, preclinical characterization of nanoparticles is complicated by the variety of materials, their unique surface properties, reactivity, and the task of tracking the individual components of multicomponent, multifunctional nanoparticle therapeutics in in vivo studies. 
There are also regulatory considerations and scale-up challenges that must be addressed. Despite these hurdles, cancer research has seen appreciable improvements in efficacy and quite a decrease in the toxicity of chemotherapeutics because of 'nanotech' formulations, and several engineered nanoparticle clinical trials are well underway. This article reviews some of the challenges and benefits of nanomedicine for cancer therapeutics and diagnostics.", "title": "" }, { "docid": "2f11cc1b08083a999d5624a9600deee9", "text": "Residual Network (ResNet) is the state-of-the-art architecture that realizes successful training of really deep neural network. It is also known that good weight initialization of neural network avoids problem of vanishing/exploding gradients. In this paper, simplified models of ResNets are analyzed. We argue that goodness of ResNet is correlated with the fact that ResNets are relatively insensitive to choice of initial weights. We also demonstrate how batch normalization improves backpropagation of deep ResNets without tuning initial values of weights.", "title": "" } ]
scidocsrr
68f2bf965191c6c8fede96c83c3894a6
Interpretable VAEs for nonlinear group factor analysis
[ { "docid": "db75809bcc029a4105dc12c63e2eca76", "text": "Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal ‘fingerprint’ of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.", "title": "" } ]
[ { "docid": "732e72f152075d47f6473910a2e98e9f", "text": "In this paper we describe the ForSpec Temporal Logic (FTL), the new temporal property-specification logic of ForSpec, Intel’s new formal specification language. The key features of FTL are as follows: it is a l inear temporal logic, based on Pnueli’s LTL, it is based on a rich set of logic al and arithmetical operations on bit vectors to describe state properties, it enables the user to define temporal connectives over time windows, it enables th user to define regular events, which are regular sequences of Boolean events, and then relate such events via special connectives, it enables the user to expre ss roperties about the past, and it includes constructs that enable the user to mode l multiple clock and reset signals, which is useful in the verification of hardwar e design.", "title": "" }, { "docid": "6ba537ef9dd306a3caaba63c2b48c222", "text": "A lumped-element circuit is proposed to model a coplanar waveguide (CPW) interdigital capacitor (IDC). Closed-form expressions suitable for CAD purposes are given for each element in the circuit. The obtained results for the series capacitance are in good agreement with those available in the literature. In addition, the scattering parameters obtained from the circuit model are compared with those obtained using the full-wave method of moments (MoM) and good agreement is obtained. Moreover, a multilayer feed-forward artificial neural network (ANN) is developed to model the capacitance of the CPW IDC. It is shown that the developed ANN has successfully learned the required task of evaluating the capacitance of the IDC. © 2005 Wiley Periodicals, Inc. Int J RF and Microwave CAE 15: 551–559, 2005.", "title": "" }, { "docid": "03fb57d2810ed42f7fe57f688db6fd57", "text": "This paper reviews some of the accomplishments in the field of robot dynamics research, from the development of the recursive Newton-Euler algorithm to the present day. Equations and algorithms are given for the most important dynamics computations, expressed in a common notation to facilitate their presentation and comparison.", "title": "" }, { "docid": "c2081b44d63490f2967517558065bdf0", "text": "The add-on battery pack in plug-in hybrid electric vehicles can be charged from an AC outlet, feed power back to the grid, provide power for electric traction, and capture regenerative energy when braking. Conventionally, three-stage bidirectional converter interfaces are used to fulfil these functions. In this paper, a single stage integrated converter is proposed based on direct AC/DC conversion theory. The proposed converter eliminates the full bridge rectifier, reduces the number of semiconductor switches and high current inductors, and improves the conversion efficiency.", "title": "" }, { "docid": "b8274589a145a94e19329b2640a08c17", "text": "Since 2004, many nations have started issuing “e-passports” containing an RFID tag that, when powered, broadcast information. It is claimed that these passports are more secure and that our data will be protected from any possible unauthorised attempts to read it. In this paper we show that there is a flaw in one of the passport’s protocols that makes it possible to trace the movements of a particular passport, without having to break the passport’s cryptographic key. All an attacker has to do is to record one session between the passport and a legitimate reader, then by replaying a particular message, the attacker can distinguish that passport from any other. 
We have implemented our attack and tested it successfully against passports issued by a range of nations.", "title": "" }, { "docid": "6ab38099b989f1d9bdc504c9b50b6bbe", "text": "Users' search tactics often appear naïve. Much research has endeavored to understand the rudimentary query typically seen in log analyses and user studies. Researchers have tested a number of approaches to supporting query development, including information literacy training and interaction design these have tried and often failed to induce users to use more complex search strategies. To further investigate this phenomenon, we combined established HCI methods with models from cultural studies, and observed customers' mediated searches for books in bookstores. Our results suggest that sophisticated search techniques demand mental models that many users lack.", "title": "" }, { "docid": "3d3c04826eafd366401231aba984419b", "text": "INTRODUCTION\nDespite the known advantages of objective physical activity monitors (e.g., accelerometers), these devices have high rates of non-wear, which leads to missing data. Objective activity monitors are also unable to capture valuable contextual information about behavior. Adolescents recruited into physical activity surveillance and intervention studies will increasingly have smartphones, which are miniature computers with built-in motion sensors.\n\n\nMETHODS\nThis paper describes the design and development of a smartphone application (\"app\") called Mobile Teen that combines objective and self-report assessment strategies through (1) sensor-informed context-sensitive ecological momentary assessment (CS-EMA) and (2) sensor-assisted end-of-day recall.\n\n\nRESULTS\nThe Mobile Teen app uses the mobile phone's built-in motion sensor to automatically detect likely bouts of phone non-wear, sedentary behavior, and physical activity. The app then uses transitions between these inferred states to trigger CS-EMA self-report surveys measuring the type, purpose, and context of activity in real-time. The end of the day recall component of the Mobile Teen app allows users to interactively review and label their own physical activity data each evening using visual cues from automatically detected major activity transitions from the phone's built-in motion sensors. Major activity transitions are identified by the app, which cues the user to label that \"chunk,\" or period, of time using activity categories.\n\n\nCONCLUSION\nSensor-driven CS-EMA and end-of-day recall smartphone apps can be used to augment physical activity data collected by objective activity monitors, filling in gaps during non-wear bouts and providing additional real-time data on environmental, social, and emotional correlates of behavior. Smartphone apps such as these have potential for affordable deployment in large-scale epidemiological and intervention studies.", "title": "" }, { "docid": "076ad699191bd3df87443f427268222a", "text": "Robotic systems for disease detection in greenhouses are expected to improve disease control, increase yield, and reduce pesticide application. We present a robotic detection system for combined detection of two major threats of greenhouse bell peppers: Powdery mildew (PM) and Tomato spotted wilt virus (TSWV). The system is based on a manipulator, which facilitates reaching multiple detection poses. Several detection algorithms are developed based on principal component analysis (PCA) and the coefficient of variation (CV). 
Tests ascertain the system can successfully detect the plant and reach the detection pose required for PM (along the side of the plant), yet it has difficulties in reaching the TSWV detection pose (above the plant). Increasing manipulator work-volume is expected to solve this issue. For TSWV, PCA-based classification with leaf vein removal achieved the highest classification accuracy (90%) while the accuracy of the CV methods was also high (85% and 87%). For PM, PCA-based pixel-level classification was high (95.2%) while leaf condition classification accuracy was low (64.3%) since it was determined based on the upper side of the leaf while disease symptoms start on its lower side. Exposure of the lower side of the leaf during detection is expected to improve PM condition detection.", "title": "" }, { "docid": "77362cc72d7a09dbbb0f067c11fe8087", "text": "The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.", "title": "" }, { "docid": "883be979cd5e7d43ded67da1a40427ce", "text": "This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ~100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution.", "title": "" }, { "docid": "50906e5d648b7598c307b09975daf2d8", "text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. 
Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups facing the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business model accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.", "title": "" }, { "docid": "ec3542685d1b6e71e523cdcafc59d849", "text": "The goal of subspace segmentation is to partition a set of data drawn from a union of subspaces into their underlying subspaces. The performance of spectral clustering based approaches heavily depends on learned data affinity matrices, which are usually constructed either directly from the raw data or from their computed representations. In this paper, we propose a novel method to simultaneously learn the representations of data and the affinity matrix of representation in a unified optimization framework. A novel Augmented Lagrangian Multiplier based algorithm is designed to effectively and efficiently seek the optimal solution of the problem. The experimental results on both synthetic and real data demonstrate the efficacy of the proposed method and its superior performance over the state-of-the-art alternatives.", "title": "" }, { "docid": "d1eb1b18105d79c44dc1b6b3b2c06ee2", "text": "An implementation of a high-speed AES algorithm based on FPGA is presented in this paper in order to improve the safety of data in transmission. The mathematical principle, encryption process and logic structure of the AES algorithm are introduced. To improve the system computing speed, pipelining and parallel processing methods were used. The simulation results show that the high-speed AES encryption algorithm was implemented correctly. Using the method of AES encryption the data could be protected effectively.", "title": "" }, { "docid": "42961b66e41a155edb74cc4ab5493c9c", "text": "OBJECTIVE\nTo determine the preventive effect of manual lymph drainage on the development of lymphoedema related to breast cancer.\n\n\nDESIGN\nRandomised single blinded controlled trial.\n\n\nSETTING\nUniversity Hospitals Leuven, Leuven, Belgium.\n\n\nPARTICIPANTS\n160 consecutive patients with breast cancer and unilateral axillary lymph node dissection. The randomisation was stratified for body mass index (BMI) and axillary irradiation and treatment allocation was concealed. Randomisation was done independently from recruitment and treatment. 
Baseline characteristics were comparable between the groups.\n\n\nINTERVENTION\nFor six months the intervention group (n = 79) performed a treatment programme consisting of guidelines about the prevention of lymphoedema, exercise therapy, and manual lymph drainage. The control group (n = 81) performed the same programme without manual lymph drainage.\n\n\nMAIN OUTCOME MEASURES\nCumulative incidence of arm lymphoedema and time to develop arm lymphoedema, defined as an increase in arm volume of 200 mL or more in the value before surgery.\n\n\nRESULTS\nFour patients in the intervention group and two in the control group were lost to follow-up. At 12 months after surgery, the cumulative incidence rate for arm lymphoedema was comparable between the intervention group (24%) and control group (19%) (odds ratio 1.3, 95% confidence interval 0.6 to 2.9; P = 0.45). The time to develop arm lymphoedema was comparable between the two group during the first year after surgery (hazard ratio 1.3, 0.6 to 2.5; P = 0.49). The sample size calculation was based on a presumed odds ratio of 0.3, which is not included in the 95% confidence interval. This odds ratio was calculated as (presumed cumulative incidence of lymphoedema in intervention group/presumed cumulative incidence of no lymphoedema in intervention group)×(presumed cumulative incidence of no lymphoedema in control group/presumed cumulative incidence of lymphoedema in control group) or (10/90)×(70/30).\n\n\nCONCLUSION\nManual lymph drainage in addition to guidelines and exercise therapy after axillary lymph node dissection for breast cancer is unlikely to have a medium to large effect in reducing the incidence of arm lymphoedema in the short term. Trial registration Netherlands Trial Register No NTR 1055.", "title": "" }, { "docid": "43b2721bb2fb4e50e855c69ea147ffd1", "text": "Bladder tumours represent a heterogeneous group of cancers. The natural history of these bladder cancers is that of recurrence of disease and progression to higher grade and stage disease. Furthermore, recurrence and progression rates of superficial bladder cancer vary according to several tumour characteristics, mainly tumour grade and stage. The most recent World Health Organization (WHO) classification of tumours of the urinary system includes urothelial flat lesions: flat hyperplasia, dysplasia and carcinoma in situ. The papillary lesions are broadly subdivided into benign (papilloma and inverted papilloma), papillary urothelial neoplasia of low malignant potential (PUNLMP) and non-invasive papillary carcinoma (low or high grade). The initial proposal of the 2004 WHO has been achieved, with most reports supporting that categories are better defined than in previous classifications. An additional important issue is that PUNLMP, the most controversial proposal of the WHO in 2004, has lower malignant behaviour than low-grade carcinoma. Whether PUNLMP remains a clinically useful category, or whether this category should be expanded to include all low-grade, stage Ta lesions (PUNLMP and low-grade papillary carcinoma) as a wider category of less aggressive tumours not labelled as cancer, needs to be discussed in the near future. This article summarizes the recent literature concerning important issues in the pathology and the clinical management of patients with bladder urothelial carcinoma. 
Emphasis is placed on clinical presentation, the significance of haematuria, macroscopic appearance (papillary, solid or mixed, single or multiple) and synchronous or metachronous presentation (field disease vs monoclonal disease with seeding), classification and microscopic variations of bladder cancer with clinical significance, TNM distribution and the pathological grading according to the 2004 WHO proposal.", "title": "" }, { "docid": "6b9663085968c5483c9a2871b4807524", "text": "E-Commerce is one of the crucial trading methods worldwide. Hence, it is important to understand consumers’ online purchase intention. This research aims to examine factors that influence consumers’ online purchase intention among university students in Malaysia. A quantitative research approach has been adopted in this research by distributing online questionnaires to 250 Malaysian university students aged between 20-29 years old, who possess experience in online purchases. Findings of this research have discovered that trust, perceived usefulness and subjective norm are the significant factors in predicting online purchase intention. However, perceived ease of use and perceived enjoyment are not significant in predicting the variance in online purchase intention. The findings also revealed that subjective norm is the most significant predicting factor on online purchase intention among university students in Malaysia. Findings of this research will provide online marketers with a better understanding on online purchase intention which enable them to direct effective online marketing strategies.", "title": "" }, { "docid": "e715b87fc145d80dbab179abcc85c14b", "text": "This paper proposes an efficient multi-view 3D reconstruction method based on randomization and propagation scheme. Our method progressively refines a 3D model of a given scene by randomly perturbing the initial guess of 3D points and propagating photo-consistent ones to their neighbors. While finding local optima is an ordinary method for better photo-consistency, our randomization and propagation takes lucky matchings to spread better points replacing old ones for reducing the computational complexity. Experiments show favorable efficiency of the proposed method accompanied by competitive accuracy with the state-of-the-art methods.", "title": "" }, { "docid": "4d405c1c2919be01209b820f61876d57", "text": "This paper presents a single-pole eight-throw switch, based on an eight-way power divider, using substrate integrated waveguide (SIW) technology. Eight sectorial-lines are formed by inserting radial slot-lines on the top plate of the SIW power divider. Each sectorial-line can be controlled independently with high level of isolation. The switching is accomplished by altering the capacitance of the varactor on the line, which causes different input impedances to be seen at a central probe to each sectorial line. The proposed structure works as a switching circuit and an eight-way power divider depending on the bias condition. The change in resonant frequency and input impedance are estimated by adapting a tapered transmission line model. The detailed design, fabrication, and measurement are discussed.", "title": "" }, { "docid": "60c976cb53d5128039e752e5f797f110", "text": "This essay presents and discusses the developing role of virtual and augmented reality technologies in education. 
Addressing the challenges in adapting such technologies to focus on improving students’ learning outcomes, the author discusses the inclusion of experiential modes as a vehicle for improving students’ knowledge acquisition. Stakeholders in the educational role of technology include students, faculty members, institutions, and manufacturers. While the benefits of such technologies are still under investigation, the technology landscape offers opportunities to enhance face-to-face and online teaching, including contributions in the understanding of abstract concepts and training in real environments and situations. Barriers to technology use involve limited adoption of augmented and virtual reality technologies, and, more directly, necessary training of teachers in using such technologies within meaningful educational contexts. The author proposes a six-step methodology to aid adoption of these technologies as basic elements within the regular education: training teachers; developing conceptual prototypes; teamwork involving the teacher, a technical programmer, and an educational architect; and producing the experience, which then provides results in the subsequent two phases wherein teachers are trained to apply augmented- and virtual-reality solutions within their teaching methodology using an available subject-specific experience and then finally implementing the use of the experience in a regular subject with students. The essay concludes with discussion of the business opportunities facing virtual reality in face-to-face education as well as augmented and virtual reality in online education.", "title": "" }, { "docid": "7ef3829b1fab59c50f08265d7f4e0132", "text": "Muscle glycogen is the predominant energy source for soccer match play, though its importance for soccer training (where lower loads are observed) is not well known. In an attempt to better inform carbohydrate (CHO) guidelines, we quantified training load in English Premier League soccer players (n = 12) during a one-, two- and three-game week schedule (weekly training frequency was four, four and two, respectively). In a one-game week, training load was progressively reduced (P < 0.05) in 3 days prior to match day (total distance = 5223 ± 406, 3097 ± 149 and 2912 ± 192 m for day 1, 2 and 3, respectively). Whilst daily training load and periodisation was similar in the one- and two-game weeks, total accumulative distance (inclusive of both match and training load) was higher in a two-game week (32.5 ± 4.1 km) versus one-game week (25.9 ± 2 km). In contrast, daily training total distance was lower in the three-game week (2422 ± 251 m) versus the one- and two-game weeks, though accumulative weekly distance was highest in this week (35.5 ± 2.4 km) and more time (P < 0.05) was spent in speed zones >14.4 km · h(-1) (14%, 18% and 23% in the one-, two- and three-game weeks, respectively). Considering that high CHO availability improves physical match performance but high CHO availability attenuates molecular pathways regulating training adaptation (especially considering the low daily customary loads reported here, e.g., 3-5 km per day), we suggest daily CHO intake should be periodised according to weekly training and match schedules.", "title": "" } ]
scidocsrr
2c900f9a50c5d0734e0b36dd9b94a16d
Three Levels Load Balancing on Cloudsim
[ { "docid": "8a7cf92704d06baee24cb6f2a551094d", "text": "Network bandwidth and hardware technology are developing rapidly, resulting in the vigorous development of the Internet. A new concept, cloud computing, uses low-power hosts to achieve high reliability. The cloud computing, an Internet-based development in which dynamically scalable and often virtualized resources are provided as a service over the Internet has become a significant issue. The cloud computing refers to a class of systems and applications that employ distributed resources to perform a function in a decentralized manner. Cloud computing is to utilize the computing resources (service nodes) on the network to facilitate the execution of complicated tasks that require large-scale computation. Thus, the selecting nodes for executing a task in the cloud computing must be considered, and to exploit the effectiveness of the resources, they have to be properly selected according to the properties of the task. However, in this study, a two-phase scheduling algorithm under a three-level cloud computing network is advanced. The proposed scheduling algorithm combines OLB (Opportunistic Load Balancing) and LBMM (Load Balance Min-Min) scheduling algorithms that can utilize more better executing efficiency and maintain the load balancing of system.", "title": "" } ]
[ { "docid": "c2392b947816f271f4b7a71ff343bceb", "text": "The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson´s zero-order correlation coefficients (r) between sit-and-reach tests and hamstrings and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter- Schmidt´s psychometric meta-analysis approach was conducted to estimate population criterion- related validity of sit-and-reach tests. Firstly, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, the three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlations values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but they had a low mean for estimating lumbar extensibility (rp = 0. 16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited such as in a school setting or in large scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for hamstring extensibility estimation, but not for estimating lumbar extensibility. Key PointsOverall sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but they have a low mean validity for estimating lumbar extensibility.Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option to estimate hamstring extensibility.End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than the modifications that incorporate fingers-to-box distance (e.g., the Modified sit-and-reach test).When angular tests such as straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field test alternative to estimate hamstring extensibility, but not to estimate lumbar extensibility.", "title": "" }, { "docid": "7aa8fa3d64f6a03f121acd8dd0899e7e", "text": "Dyslexia is more than just difficulty with translating letters into sounds. Many dyslexics have problems with clearly seeing letters and their order. These difficulties may be caused by abnormal development of their visual \"magnocellular\" (M) nerve cells; these mediate the ability to rapidly identify letters and their order because they control visual guidance of attention and of eye fixations. 
Evidence for M cell impairment has been demonstrated at all levels of the visual system: in the retina, in the lateral geniculate nucleus, in the primary visual cortex and throughout the dorsal visuomotor \"where\" pathway forward from the visual cortex to the posterior parietal and prefrontal cortices. This abnormality destabilises visual perception; hence, its severity in individuals correlates with their reading deficit. Treatments that facilitate M function, such as viewing text through yellow or blue filters, can greatly increase reading progress in children with visual reading problems. M weakness may be caused by genetic vulnerability, which can disturb orderly migration of cortical neurones during development or possibly reduce uptake of omega-3 fatty acids, which are usually obtained from fish oils in the diet. For example, M cell membranes require replenishment of the omega-3 docosahexaenoic acid to maintain their rapid responses. Hence, supplementing some dyslexics' diets with DHA can greatly improve their M function and their reading.", "title": "" }, { "docid": "143a4fcc0f2949e797e6f51899e811e2", "text": "A long-standing problem at the interface of artificial intelligence and applied mathematics is to devise an algorithm capable of achieving human level or even superhuman proficiency in transforming observed data into predictive mathematical models of the physical world. In the current era of abundance of data and advanced machine learning capabilities, the natural question arises: How can we automatically uncover the underlying laws of physics from high-dimensional data generated from experiments? In this work, we put forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time. Specifically, we approximate the unknown solution as well as the nonlinear dynamics by two deep neural networks. The first network acts as a prior on the unknown solution and essentially enables us to avoid numerical differentiations which are inherently ill-conditioned and unstable. The second network represents the nonlinear dynamics and helps us distill the mechanisms that govern the evolution of a given spatiotemporal data-set. We test the effectiveness of our approach for several benchmark problems spanning a number of scientific domains and demonstrate how the proposed framework can help us accurately learn the underlying dynamics and forecast future states of the system. In particular, we study the Burgers’, Korteweg-de Vries (KdV), Kuramoto-Sivashinsky, nonlinear Schrödinger, and Navier-Stokes equations.", "title": "" }, { "docid": "2013e66a3f96ab6c65daa1a0f8244ec9", "text": "Recent years have seen a dramatic growth of semantic web on the data level, but unfortunately not on the schema level, which contains mostly concept hierarchies. The shortage of schemas makes the semantic web data difficult to be used in many semantic web applications, so schemas learning from semantic web data becomes an increasingly pressing issue. In this paper we propose a novel schemas learning approach - BelNet, which combines description logics (DLs) with Bayesian networks. In this way BelNet is capable to understand and capture the semantics of the data on the one hand, and to handle incompleteness during the learning procedure on the other hand. 
The main contributions of this work are: (i) we introduce the architecture of BelNet, and correspondingly propose the ontology learning techniques in it, (ii) we compare the experimental results of our approach with the state-of-the-art ontology learning approaches, and provide discussions from different aspects.", "title": "" }, { "docid": "54581984ce217217d59b7118721e2f60", "text": "Exposure to antibiotics induces the expression of mutagenic bacterial stress-response pathways, but the evolutionary benefits of these responses remain unclear. One possibility is that stress-response pathways provide a short-term advantage by protecting bacteria against the toxic effects of antibiotics. Second, it is possible that stress-induced mutagenesis provides a long-term advantage by accelerating the evolution of resistance. Here, we directly measure the contribution of the Pseudomonas aeruginosa SOS pathway to bacterial fitness and evolvability in the presence of sublethal doses of ciprofloxacin. Using short-term competition experiments, we demonstrate that the SOS pathway increases competitive fitness in the presence of ciprofloxacin. Continued exposure to ciprofloxacin results in the rapid evolution of increased fitness and antibiotic resistance, but we find no evidence that SOS-induced mutagenesis accelerates the rate of adaptation to ciprofloxacin during a 200 generation selection experiment. Intriguingly, we find that the expression of the SOS pathway decreases during adaptation to ciprofloxacin, and this helps to explain why this pathway does not increase long-term evolvability. Furthermore, we argue that the SOS pathway fails to accelerate adaptation to ciprofloxacin because the modest increase in the mutation rate associated with SOS mutagenesis is offset by a decrease in the effective strength of selection for increased resistance at a population level. Our findings suggest that the primary evolutionary benefit of the SOS response is to increase bacterial competitive ability, and that stress-induced mutagenesis is an unwanted side effect, and not a selected attribute, of this pathway.", "title": "" }, { "docid": "5f1f7847600207d1216384f8507be63b", "text": "This paper introduces U-relations, a succinct and purely relational representation system for uncertain databases. U-relations support attribute-level uncertainty using vertical partitioning. If we consider positive relational algebra extended by an operation for computing possible answers, a query on the logical level can be translated into, and evaluated as, a single relational algebra query on the U-relational representation. The translation scheme essentially preserves the size of the query in terms of number of operations and, in particular, number of joins. Standard techniques employed in off-the-shelf relational database management systems are effective for optimizing and processing queries on U-relations. In our experiments we show that query evaluation on U-relations scales to large amounts of data with high degrees of uncertainty.", "title": "" }, { "docid": "8680ee8f949e02529d6914fcea6f7a5b", "text": "Natural language inference (NLI) is one of the most important tasks in NLP. In this study, we propose a novel method using word dictionaries, which are pairs of a word and its definition, as external knowledge. Our neural definition embedding mechanism encodes input sentences with the definitions of each word of the sentences on the fly. 
It can encode definitions of words considering the context of the input sentences by using an attention mechanism. We evaluated our method using WordNet as a dictionary and confirmed that it performed better than baseline models when using the full or a subset of 100d GloVe as word embeddings.", "title": "" }, { "docid": "ffbebb5d8f4d269353f95596c156ba5c", "text": "Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model’s output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. Next, we show an extension of the semi-honest protocol that obtains one-sided security against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate tenfold improvements in computation and bandwidth.", "title": "" }, { "docid": "5bef975924d427c3ae186d92a93d4f74", "text": "The Voronoi diagram of a set of sites partitions space into regions, one per site; the region for a site s consists of all points closer to s than to any other site. The dual of the Voronoi diagram, the Delaunay triangulation, is the unique triangulation such that the circumsphere of every simplex contains no sites in its interior. Voronoi diagrams and Delaunay triangulations have been rediscovered or applied in many areas of mathematics and the natural sciences; they are central topics in computational geometry, with hundreds of papers discussing algorithms and extensions. Section 27.1 discusses the definition and basic properties in the usual case of point sites in R with the Euclidean metric, while Section 27.2 gives basic algorithms. Some of the many extensions obtained by varying metric, sites, environment, and constraints are discussed in Section 27.3. Section 27.4 finishes with some interesting and nonobvious structural properties of Voronoi diagrams and Delaunay triangulations.", "title": "" }, { "docid": "0b2cff582a4b7d81b42e5bab2bd7a237", "text": "The increasing popularity of real-world recommender systems produces data continuously and rapidly, and it becomes more realistic to study recommender systems under streaming scenarios. Data streams present distinct properties such as temporally ordered, continuous and high-velocity, which poses tremendous challenges to traditional recommender systems. In this paper, we investigate the problem of recommendation with stream inputs. In particular, we provide a principled framework termed sRec, which provides explicit continuous-time random process models of the creation of users and topics, and of the evolution of their interests. A variational Bayesian approach called recursive meanfield approximation is proposed, which permits computationally efficient instantaneous on-line inference. 
Experimental results on several real-world datasets demonstrate the advantages of our sRec over other state-of-the-art methods.", "title": "" }, { "docid": "6936b03672c64798ca4be118809cc325", "text": "We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning across rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair and faster testing with O(n) feedforward passes for n keypoints, instead of O(n^2) for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on KITTI, PASCAL and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.", "title": "" }, { "docid": "bf9910e87c2294e307f142e0be4ed4f6", "text": "The rapidly developing cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute applications remotely. A mobile device should judiciously decide whether to offload computation and which portion of application should be offloaded to the cloud. In this paper, we consider a mobile cloud computing (MCC) interaction system consisting of multiple mobile devices and the cloud computing facilities. We provide a nested two stage game formulation for the MCC interaction system. In the first stage, each mobile device determines the portion of its service requests for remote processing in the cloud. In the second stage, the cloud computing facilities allocate a portion of its total resources for service request processing depending on the request arrival rate from all the mobile devices. The objective of each mobile device is to minimize its power consumption as well as the service request response time. The objective of the cloud computing controller is to maximize its own profit. Based on the backward induction principle, we derive the optimal or near-optimal strategy for all the mobile devices as well as the cloud computing controller in the nested two stage game using convex optimization technique. Experimental results demonstrate the effectiveness of the proposed nested two stage game-based optimization framework on the MCC interaction system. The mobile devices can achieve simultaneous reduction in average power consumption and average service request response time, by 21.8% and 31.9%, respectively, compared with baseline methods.", "title": "" }, { "docid": "49dd1fd4640a160ba41fed048b2c804b", "text": "This paper proposes a novel method to predict increases in YouTube viewcount driven from the Twitter social network. Specifically, we aim to predict two types of viewcount increases: a sudden increase in viewcount (named as Jump), and the viewcount shortly after the upload of a new video (named as Early). 
Experiments on hundreds of thousands of videos and millions of tweets show that Twitter-derived features alone can predict whether a video will be in the top 5% for Early popularity with 0.7 Precision@100. Furthermore, our results reveal that while individual influence is indeed important for predicting how Twitter drives YouTube views, it is a diversity of interest from the most active to the least active Twitter users mentioning a video (measured by the variation in their total activity) that is most informative for both Jump and Early prediction. In summary, by going beyond features that quantify individual influence and additionally leveraging collective features of activity variation, we are able to obtain an effective cross-network predictor of Twitter-driven YouTube views.", "title": "" }, { "docid": "fa6ec1ea2a509c837cd65774a78d5d2e", "text": "Critically ill patients frequently experience poor sleep, characterized by frequent disruptions, loss of circadian rhythms, and a paucity of time spent in restorative sleep stages. Factors that are associated with sleep disruption in the intensive care unit (ICU) include patient-ventilator dysynchrony, medications, patient care interactions, and environmental noise and light. As the field of critical care increasingly focuses on patients' physical and psychological outcomes following critical illness, understanding the potential contribution of ICU-related sleep disruption on patient recovery is an important area of investigation. This review article summarizes the literature regarding sleep architecture and measurement in the critically ill, causes of ICU sleep fragmentation, and potential implications of ICU-related sleep disruption on patients' recovery from critical illness. With this background information, strategies to optimize sleep in the ICU are also discussed.", "title": "" }, { "docid": "db83ca64b54bbd54b4097df425c48017", "text": "This paper introduces the application of high-resolution angle estimation algorithms for a 77GHz automotive long range radar sensor. High-resolution direction of arrival (DOA) estimation is important for future safety systems. Using FMCW principle, major challenges discussed in this paper are small number of snapshots, correlation of the signals, and antenna mismatches. Simulation results allow analysis of these effects and help designing the sensor. Road traffic measurements show superior DOA resolution and the feasibility of high-resolution angle estimation.", "title": "" }, { "docid": "7c1ce170b4258e46f98c24209f0f6def", "text": "It has been widely accepted that iris biometric systems are not subject to a template aging effect. Baker et al. [1] recently presented the first published evidence of a template aging effect, using images acquired from 2004 through 2008 with an LG 2200 iris imaging system, representing a total of 13 subjects (26 irises). We report on a template aging study involving two different iris recognition algorithms, a larger number of subjects (43), a more modern imaging system (LG 4000), and over a shorter time-lapse (2 years). We also investigate the degree to which the template aging effect may be related to pupil dilation and/or contact lenses. We find evidence of a template aging effect, resulting in an increase in match hamming distance and false reject rate.", "title": "" }, { "docid": "db50001ee0a3ee4da8982541591447d1", "text": "This paper introduces a tool to automatically generate meta-data from game sprite sheets. 
MuSSE is a tool developed to extract XML data from sprite sheet images with non-uniform - multi-sized - sprites. MuSSE (Multi-sized Sprite Sheet meta-data Exporter) is based on a Blob detection algorithm that incorporates a connected-component labeling system. Hence, blobs of arbitrary size can be extracted by adjusting component connectivity parameters. This image detection algorithm defines boundary blobs for each individual sprite in a sprite sheet. Every specific blob defines a sprite characteristic within the sheet: position, name and size, which allows for subsequent data specification for each blob/image. Several examples on real images illustrate the performance of the proposed algorithm and working tool.", "title": "" }, { "docid": "5a805b6f9e821b7505bccc7b70fdd557", "text": "There are many factors that influence the translators while translating a text. Amongst these factors is the notion of ideology transmission through the translated texts. This paper is located within the framework of Descriptive Translation Studies (DTS) and Critical Discourse Analysis (CDA). It investigates the notion of ideology with particular use of critical discourse analysis. The purpose is to highlight the relationship between language and ideology in translated texts. It also aims at discovering whether the translator’s socio-cultural and ideology constraints influence the production of his/her translations. As a mixed research method study, the corpus consists of two different Arabic translated versions of the English book “Media Control” by Noam Chomsky. The micro-level contains the qualitative stage where detailed description and comparison - contrastive and comparative analysis - will be provided. The micro-level analysis should include the lexical items along with the grammatical items (passive vs. active, nominalisation vs. de-nominalisation, moralisation and omission vs. addition). In order to have more reliable and objective data, computed frequencies of the ideological significance occurrences along with percentage and Chi-square formula were conducted throughout the data analysis stage, which then forms the quantitative part of the current study. The main objective of the mentioned data analysis methodologies is to find out the dissimilarity between the proportions of the information obtained from the target texts (TTs) and their equivalent at the source text (ST). The findings indicate that there are significant differences between the two TTs in relation to the word choices including the lexical items and the other syntactic structure compared with the ST. These significant differences indicate some ideological transmission through the translation process of the two TTs. Therefore, and to some extent, it can be stated that the differences were also influenced by the translators’ socio-cultural and ideological constraints.", "title": "" }, { "docid": "eff8993770389a798eeca4996c69474a", "text": "Swarm intelligence is a research field that models the collective intelligence in swarms of insects or animals. Many algorithms that simulate these models have been proposed in order to solve a wide range of problems. The Artificial Bee Colony algorithm is one of the most recent swarm intelligence based algorithms which simulates the foraging behaviour of honey bee colonies. 
In this work, modified versions of the Artificial Bee Colony algorithm are introduced and applied for efficiently solving real-parameter optimization problems. 2010 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr