Column              Type           Range
query_id            string         length 32-32
query               string         length 6-3.9k
positive_passages   list           length 1-21
negative_passages   list           length 10-100
subset              string class   7 values
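As a quick orientation, the sketch below shows how rows with this schema could be loaded and inspected with the Hugging Face `datasets` library. The dataset identifier is a placeholder (this dump does not name its source), and the passage field names (`docid`, `text`, `title`) are inferred from the JSON objects in the rows that follow.

```python
# Minimal sketch, assuming the rows are published as a Hugging Face dataset.
# "your-org/your-reranking-dataset" is a placeholder, not the actual source path.
from datasets import load_dataset

ds = load_dataset("your-org/your-reranking-dataset", split="train")

row = ds[0]
print(row["query_id"])                # 32-character id
print(row["query"])                   # query text, 6 chars to ~3.9k
print(len(row["positive_passages"]))  # 1-21 relevant passages
print(len(row["negative_passages"]))  # 10-100 non-relevant passages
print(row["subset"])                  # one of 7 subset names, e.g. "scidocsrr"

# Each passage appears to be a dict with "docid", "text", and "title" keys,
# matching the JSON objects shown in the rows below.
passage = row["positive_passages"][0]
print(passage["docid"], passage["title"], passage["text"][:80])
```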
fff8169a9f03b32ce1ca2bc8ac012100
Much ado about grit: A meta-analytic synthesis of the grit literature.
[ { "docid": "c355dc8d0ec6b673cea3f2ab39d13701", "text": "Errors in estimating and forecasting often result from the failure to collect and consider enough relevant information. We examine whether attributes associated with persistence in information acquisition can predict performance in an estimation task. We focus on actively open-minded thinking (AOT), need for cognition, grit, and the tendency to maximize or satisfice when making decisions. In three studies, participants made estimates and predictions of uncertain quantities, with varying levels of control over the amount of information they could collect before estimating. Only AOT predicted performance. This relationship was mediated by information acquisition: AOT predicted the tendency to collect information, and information acquisition predicted performance. To the extent that available information is predictive of future outcomes, actively open-minded thinkers are more likely than others to make accurate forecasts.", "title": "" } ]
[ { "docid": "3bc1b99dec1098d7ae47bc10856a2752", "text": "BACKGROUND\nThe choice of study type is an important aspect of the design of medical studies. The study design and consequent study type are major determinants of a study's scientific quality and clinical value.\n\n\nMETHODS\nThis article describes the structured classification of studies into two types, primary and secondary, as well as a further subclassification of studies of primary type. This is done on the basis of a selective literature search concerning study types in medical research, in addition to the authors' own experience.\n\n\nRESULTS\nThree main areas of medical research can be distinguished by study type: basic (experimental), clinical, and epidemiological research. Furthermore, clinical and epidemiological studies can be further subclassified as either interventional or noninterventional.\n\n\nCONCLUSIONS\nThe study type that can best answer the particular research question at hand must be determined not only on a purely scientific basis, but also in view of the available financial resources, staffing, and practical feasibility (organization, medical prerequisites, number of patients, etc.).", "title": "" }, { "docid": "2486eaddb8b00eabcc32ea4588a9d189", "text": "Ontology design patterns have been pointed out as a promising approach for ontology engineering. The goal of this paper is twofold. Firstly, based on well-established works in Software Engineering, we revisit the notion of ontology patterns in Ontology Engineering to introduce the notion of ontology pattern language as a way to organize related ontology patterns. Secondly, we present an overview of a software process ontology pattern language.", "title": "" }, { "docid": "1a2d9da5b42a7ae5a8dcf5fef48cfe26", "text": "The space of bio-inspired hardware can be partitioned along three axes: phylogeny, ontogeny, and epigenesis. We refer to this as the POE model. Our Embryonics (for embryonic electronics) project is situated along the ontogenetic axis of the POE model and is inspired by the processes of molecular biology and by the embryonic development of living beings. We will describe the architecture of multicellular automata that are endowed with self-replication and self-repair properties. In the conclusion, we will present our major on-going project: a giant self-repairing electronic watch, the BioWatch, built on a new reconfigurable tissue, the electronic wall or e–wall.", "title": "" }, { "docid": "b44ef33f614c4e3aa280a403002ac492", "text": "Over recent decades, globalization has resulted in a steady increase in cross-border financial flows around the world. To build an abstract representation of a real-world financial market situation, we structure the fundamental influences among homogeneous and heterogeneous markets with three types of correlations: the inner-domain correlation between homogeneous markets in various countries, the cross-domain correlation between heterogeneous markets, and the time-series correlation between current and past markets. Such types of correlations in global finance challenge traditional machine learning approaches due to model complexity and nonlinearity. In this paper, we propose a novel cross-domain deep learning approach (Cd-DLA) to learn real-world complex correlations for multiple financial market prediction. 
Based on recurrent neural networks, which capture the time-series interactions in financial data, our model utilizes the attention mechanism to analyze the inner-domain and cross-domain correlations, and then aggregates all of them for financial forecasting. Experiment results on ten-year financial data on currency and stock markets from three countries prove the performance of our approach over other baselines.", "title": "" }, { "docid": "c0c231cf656c83385a5d1038a29be36e", "text": "This paper describes a study of the reasons for delay in software development that was carried out in 1988 and 1989 in a Software Engineering Department. The aim of the study was to gain an insight into the reasons for differences between plans and reality in development activities in order to be able to take actions for improvement. A classification was used to determine the reasons. One hundred and sixty activities, comprising over 15 000 hours of work, have been analyzed. Actions have been taken in the Department as a result of the study. These actions should enable future projects to follow the plan more closely. The actions for improvement include the introduction of maintenance weeks. Similar studies in other software development departments have shown that the reasons varied widely from one department to another. It is recommended that every department should gain an insight into its reasons for delay in software development so as to be able to take appropriate actions for improvement.", "title": "" }, { "docid": "f3e5311643b0cad6102283b39dc1e2df", "text": "The transition from traditional methods to agile project management methods and the changes needed to obtain their real benefits are difficult to achieve. Applying agile methodologies based on maturity models such as Capability Maturity Model Integration (CMMI) has been the focus of much debate in academic circles and in the software industry. Given the high and widespread rate of failure in adopting agility, and also arising from many of the reasons given to project management, this paper proposes a strategy for implementing agile project management in companies which seek to comply with CMMI by making use of the best practices of Agile Project Management and of the main agile methods and frameworks in a gradual and disciplined manner thereby contributing to the increased success rate of software development projects.", "title": "" }, { "docid": "44fdf1c17ebda2d7b2967c84361a5d9a", "text": "A high-efficiency power amplifier (PA) is important in a Megahertz wireless power transfer (WPT) system. It is attractive to apply the Class-E PA for its simple structure and high efficiency. However, the conventional design for Class-E PA can only ensure a high efficiency for a fixed load. It is necessary to develop a high-efficiency Class-E PA for a wide-range load in WPT systems. A novel design method for Class-E PA is proposed to achieve this objective in this paper. The PA achieves high efficiency, above 80%, for a load ranging from 10 to 100 Ω at 6.78 MHz in the experiment.", "title": "" }, { "docid": "4cde522275c034a8025c75d144a74634", "text": "Novel sentence detection aims at identifying novel information from an incoming stream of sentences. Our research applies named entity recognition (NER) and part-of-speech (POS) tagging on sentence-level novelty detection and proposes a mixed method to utilize these two techniques. Furthermore, we discuss the performance when setting different history sentence sets. 
Experimental results of different approaches on TREC'04 Novelty Track show that our new combined method outperforms some other novelty detection methods in terms of precision and recall. The experimental observations of each approach are also discussed.", "title": "" }, { "docid": "d7ec8f90efe6e85dc05a6da2be732f9f", "text": "Oral hairy leukoplakia (OHL) is a lesion frequently, although not exclusively, observed in patients infected by human immunodeficiency viruses (HIV). OHL is clinically characterized by bilateral, often elevated, white patches of the lateral borders and dorsum of the tongue. Histologically, there is profound acanthosis, sometimes with koilocytic changes, and a lack of a notable inflammatory infiltrate. The koilocytic changes are due to intense replication of Epstein-Barr virus (EBV), while epithelial hyperplasia and acanthosis are likely to result from the combined action of the EBV-encoded proteins, latent membrane protein-1, and antiapoptotic BHRF1. How OHL is initiated and whether it develops after EBV reactivation from latency or superinfection remain unresolved; nevertheless, definitive diagnosis requires the demonstration of EBV replicating vegetatively in histological or cytological specimens. In patients with HIV infection, the development of OHL may herald severe HIV disease and the rapid onset of AIDS, but despite its title, OHL is not regarded as premalignant and is unlikely to give rise to oral squamous cell carcinoma.", "title": "" }, { "docid": "800cabf6fbdf06c1f8fc6b65f503e13e", "text": "An information theoretic measure is derived that quantifies the statistical coherence between systems evolving in time. The standard time delayed mutual information fails to distinguish information that is actually exchanged from shared information due to common history and input signals. In our new approach, these influences are excluded by appropriate conditioning of transition probabilities. The resulting transfer entropy is able to distinguish effectively driving and responding elements and to detect asymmetry in the interaction of subsystems.", "title": "" }, { "docid": "bf87ee431012af3a0648fe0ed9aeb61f", "text": "Despite the importance attached to homework in cognitive-behavioral therapy for depression, quantitative studies of its impact on outcome have been limited. One aim of the present study was to replicate a previous finding suggesting that improvement can be predicted from the quality of the client's compliance early in treatment. If homework is indeed an effective ingredient in this form of treatment, it is important to know how compliance can be influenced. The second aim of the present study was to examine the effectiveness of several methods of enhancing compliance that have frequently been recommended to therapists. The data were drawn from 235 sessions received by 25 clients. Therapists' ratings of compliance following the first two sessions of treatment contributed significantly to the prediction of improvement at termination (though not at followup). However, compliance itself could not be predicted from any of the clients' ratings of therapist behavior in recommending the assignments.", "title": "" }, { "docid": "46658067ffc4fd2ecdc32fbaaa606170", "text": "Adolescent resilience research differs from risk research by focusing on the assets and resources that enable some adolescents to overcome the negative effects of risk exposure. 
We discuss three models of resilience-the compensatory, protective, and challenge models-and describe how resilience differs from related concepts. We describe issues and limitations related to resilience and provide an overview of recent resilience research related to adolescent substance use, violent behavior, and sexual risk behavior. We then discuss implications that resilience research has for intervention and describe some resilience-based interventions.", "title": "" }, { "docid": "dda8427a6630411fc11e6d95dbff08b9", "text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.", "title": "" }, { "docid": "60ebdcd2d3e47ce8a054f2073672f43e", "text": "Deep reinforcement learning algorithms that estimate state and state-action value functions have been shown to be effective in a variety of challenging domains, including learning control strategies from raw image pixels. However, algorithms that estimate state and state-action value functions typically assume a fully observed state and must compensate for partial observations by using finite length observation histories or recurrent networks. In this work, we propose a new deep reinforcement learning algorithm based on counterfactual regret minimization that iteratively updates an approximation to an advantage-like function and is robust to partially observed state. We demonstrate that this new algorithm can substantially outperform strong baseline methods on several partially observed reinforcement learning tasks: learning first-person 3D navigation in Doom and Minecraft, and acting in the presence of partially observed objects in Doom and Pong.", "title": "" }, { "docid": "2c6d8e232c2d609c5ff1577ae39a9bad", "text": "In this paper, we present a framework and a system that extracts events relevant to a query from a collection C of documents, and places such events along a timeline. Each event is represented by a sentence extracted from C, based on the assumption that \"important\" events are widely cited in many documents for a period of time within which these events are of interest. In our experiments, we used queries that are event types (\"earthquake\") and person names (e.g. \"George Bush\"). 
Evaluation was performed using G8 leader names as queries: comparison made by human evaluators between manually and system generated timelines showed that although manually generated timelines are on average more preferable, system generated timelines are sometimes judged to be better than manually constructed ones.", "title": "" }, { "docid": "370b416dd51cfc08dc9b97f87c500eba", "text": "Apollonian circle packings arise by repeatedly filling the interstices between mutually tangent circles with further tangent circles. It is possible for every circle in such a packing to have integer radius of curvature, and we call such a packing an integral Apollonian circle packing. This paper studies number-theoretic properties of the set of integer curvatures appearing in such packings. Each Descartes quadruple of four tangent circles in the packing gives an integer solution to the Descartes equation, which relates the radii of curvature of four mutually tangent circles: x þ y þ z þ w 1⁄4 1 2 ðx þ y þ z þ wÞ: Each integral Apollonian circle packing is classified by a certain root quadruple of integers that satisfies the Descartes equation, and that corresponds to a particular quadruple of circles appearing in the packing. We express the number of root quadruples with fixed minimal element n as a class number, and give an exact formula for it. We study which integers occur in a given integer packing, and determine congruence restrictions which sometimes apply. We present evidence suggesting that the set of integer radii of curvatures that appear in an integral Apollonian circle packing has positive density, and in fact represents all sufficiently large integers not excluded by Corresponding author. E-mail addresses: graham@ucsd.edu (R.L. Graham), jcl@research.att.com (J.C. Lagarias), colinm@ research.avayalabs.com (C.L. Mallows), allan@research.att.com (A.R. Wilks), catherine.yan@math. tamu.edu (C.H. Yan). 1 Current address: Department of Computer Science, University of California at San Diego, La Jolla, CA 92093, USA. 2 Work partly done during a visit to the Institute for Advanced Study. 3 Current address: Avaya Labs, Basking Ridge, NJ 07920, USA. 0022-314X/03/$ see front matter r 2003 Elsevier Science (USA). All rights reserved. doi:10.1016/S0022-314X(03)00015-5 congruence conditions. Finally, we discuss asymptotic properties of the set of curvatures obtained as the packing is recursively constructed from a root quadruple. r 2003 Elsevier Science (USA). All rights reserved.", "title": "" }, { "docid": "482747b01e4b0e74349894db308784cb", "text": "In Cognitive Radio Networks (CRNs), dynamic spectrum access allows (unlicensed) users to identify and access unused channels opportunistically, thus improves spectrum utility. In this paper, we address the user-channel allocation problem in multi-user multi-channel CRNs without a prior knowledge of channel statistics. A reward of a channel is stochastic with unknown distribution, and statistically different for each user. Each user either explores a channel to learn the channel statistics, or exploits the channel with the highest expected reward based on information collected so far. Further, a channel should be accessed exclusively by one user at a time due to a collision. 
Using multi-armed bandit framework, we develop a provably efficient solution whose computational complexity is linear to the number of users and channels.", "title": "" }, { "docid": "f96098449988c433fe8af20be0c468a5", "text": "Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.", "title": "" }, { "docid": "a89761358ab819ff110458948a6af44d", "text": "Automatic abusive language detection is a difficult but important task for online social media. Our research explores a twostep approach of performing classification on abusive language and then classifying into specific types and compares it with one-step approach of doing one multi-class classification for detecting sexist and racist languages. With a public English Twitter corpus of 20 thousand tweets in the type of sexism and racism, our approach shows a promising performance of 0.827 Fmeasure by using HybridCNN in one-step and 0.824 F-measure by using logistic regression in two-steps.", "title": "" }, { "docid": "bac27eec278ebbe4320ea773b55defe5", "text": "On theoretical, methodological, and practical grounds, this paper argues the case for conducting processual studies of organizational change. Such process studies may be conducted through a research approach which is not only longitudinal but also seeks to analyze processes in their intra-organizational and social, economic, political and business context. This paper outlines some of the epistemological and craft features of contextualist research and ends by posing questions about the evaluation of research conducted in a contextualist manner. Contrary to the way the practice of research is often taught and written up, the activity of research is clearly a social process and not merely a rationally contrived act. Furthermore it is a social process descriptively more easily characterized in the language of muddling through, incrementalism, and political process than it is as rational, foresightful, goal directed activity. Indeed it seems naive and two-faced of us to recognize, on the one hand, the now familiar notions that problem solving and decision making processes in organizations have elements of political process (Pettigrew 1973a), incrementalism", "title": "" } ]
scidocsrr
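The positive/negative split in each row suggests a passage-reranking use. Assuming that reading, the sketch below flattens a record such as the one above into labeled (query, passage) pairs; the function name is illustrative and not part of the dataset.

```python
# Sketch: turn one row into (query, passage_text, label) pairs, assuming the
# positive/negative passages are meant for reranker training or evaluation.
# label 1 = relevant, label 0 = non-relevant.
def row_to_pairs(row):
    pairs = []
    for passage in row["positive_passages"]:
        pairs.append((row["query"], passage["text"], 1))
    for passage in row["negative_passages"]:
        pairs.append((row["query"], passage["text"], 0))
    return pairs

# For the record above, this yields one relevant pair for the query
# "Much ado about grit: A meta-analytic synthesis of the grit literature."
# plus one non-relevant pair per negative passage.
```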
c62a3b776393389e1b2fa156c4eb1afc
Combinatorial Testing for Deep Learning Systems
[ { "docid": "e829a9400cfb2723fdb6a6d3d939c070", "text": "Exhaustive testing of computer software is intractable, but empirical studies of software failures suggest that testing can in some cases be effectively exhaustive. We show that software failures in a variety of domains were caused by combinations of relatively few conditions. These results have important implications for testing. If all faults in a system can be triggered by a combination of n or fewer parameters, then testing all n-tuples of parameters is effectively equivalent to exhaustive testing, if software behavior is not dependent on complex event sequences and variables have a small set of discrete values.", "title": "" }, { "docid": "fe5a43325e2bbedf9679cc6c30e083f0", "text": "Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. Most existing approaches for crafting adversarial examples necessitate some knowledge (architecture, parameters, etc) of the network at hand. In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge. Our algorithm employs object detection techniques such as SIFT (Scale Invariant Feature Transform) to extract features from an image. These features are converted into a mutable saliency distribution, where high probability is assigned to pixels that affect the composition of the image with respect to the human visual system. We formulate the crafting of adversarial examples as a two-player turn-based stochastic game, where the first player’s objective is to minimise the distance to an adversarial example by manipulating the features, and the second player can be cooperative, adversarial, or random. We show that, theoretically, the two-player game can converge to the optimal strategy, and that the optimal strategy represents a globally minimal adversarial image. For Lipschitz networks, we also identify conditions that provide safety guarantees that no adversarial examples exist. Using Monte Carlo tree search we gradually explore the game state space to search for adversarial examples. Our experiments show that, despite the black-box setting, manipulations guided by a perception-based saliency distribution are competitive with state-of-the-art methods that rely on white-box saliency matrices or sophisticated optimization procedures. Finally, we show how our method can be used to evaluate robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.", "title": "" } ]
[ { "docid": "ca072e97f8a5486347040aeaa7909d60", "text": "Camera-based stereo-vision provides cost-efficient vision capabilities for robotic systems. The objective of this paper is to examine the performance of stereo-vision as means to enable a robotic inspection cell for haptic quality testing with the ability to detect relevant information related to the inspection task. This information comprises the location and 3D representation of a complex object under inspection as well as the location and type of quality features which are subject to the inspection task. Among the challenges is the low-distinctiveness of features in neighboring area, inconsistent lighting, similar colors as well as low intra-class variances impeding the retrieval of quality characteristics. The paper presents the general outline of the vision chain as well as performance analysis of various algorithms for relevant steps in the machine vision chain thus indicating the capabilities and drawbacks of a camera-based stereo-vision for flexible use in complex machine vision tasks.", "title": "" }, { "docid": "a207a478869f987dd4bd2cc6b5b9e9a3", "text": "Many operations in power grids, such as fault detection and event location estimation, depend on precise timing information. In this paper, a novel Time Synchronization Attack (TSA) is proposed to attack the timing information in smart grid. Since many applications in smart grid utilize synchronous measurements and most of the measurement devices are equipped with global positioning system (GPS) for precise timing, it is highly probable to attack the measurement system by spoofing the GPS. The effectiveness of TSA is demonstrated for three applications of phasor measurement unit (PMU) in smart grid, namely transmission line fault detection, voltage stability monitoring and event locationing. The validity of TSA is demonstrated by numerical simulations.", "title": "" }, { "docid": "3c9857605589542835fdcc3b5d54e2bd", "text": "Theory, design, realization and measurements of an X-band isoflux circularly polarized antenna for LEO satellite platforms are presented. The antenna is based on a metasurface composed by a dense texture of sub-wavelength metal patches on a grounded dielectric slab, excited by a surface wave generated by a coplanar feeder. The antenna is extremely flat (1.57 mm) and light (less than 1 Kg) and represents a competitive solution for space-to-ground data link applications.", "title": "" }, { "docid": "870674d3ab86ad52116e9f0dd4e9605c", "text": "Due to the global need for oil production and distribution, surrounding ecosystems have been negatively affected by oil spill externalities in individual health and community diversity. Conventional land remediation techniques run the risk of leaving chemical residues, and interacting with metals in the soil. The objective of this study was to test worm compost tea, also known as vermitea, as a bioremediation method to replace current techniques used on oil contaminated soils. To test the conditions that contributed to the efficacy of the teas, I examined different teas that looked into the mode and length of pollutant exposure. I examined oil emulsification activity, presence of biosurfactant-producing bacteria colonies, microbial diversity and abundance, and applicability of the teas to artificially contaminated soils. Overall, I found that the long-term direct oil tea had a 7.42% significant increase in biosurfactant producing microbes in comparison to the control tea. 
However, the long-term crude soil vermitea was found to be the best type of pollutant degrading tea in terms of emulsifying activity and general applicability towards reducing oil concentrations in the soil. These results will help broaden the scientific understanding towards stimulated microbial degradation of pollution, and broaden the approaches that can be taken in restoring polluted ecosystems.", "title": "" }, { "docid": "5109892c554f7fed68136f43b8c05bb8", "text": "Obese white adipose tissue (AT) is characterized by large-scale infiltration of proinflammatory macrophages, in parallel with systemic insulin resistance; however, the cellular stimulus that initiates this signaling cascade and chemokine release is still unknown. The objective of this study was to determine the role of the phosphoinositide 3-kinase (PI3K) regulatory subunits on AT macrophage (ATM) infiltration in obesity. Here, we find that the Pik3r1 regulatory subunits (i.e., p85a/p55a/p50a) are highly induced in AT from high-fat diet–fed obese mice, concurrent with insulin resistance. Global heterozygous deletion of the Pik3r1 regulatory subunits (aHZ), but not knockout of Pik3r2 (p85b), preserves whole-body, AT, and skeletal muscle insulin sensitivity, despite severe obesity. Moreover, ATM accumulation, proinflammatory gene expression, and ex vivo chemokine secretion in obese aHZ mice are markedly reduced despite endoplasmic reticulum (ER) stress, hypoxia, adipocyte hypertrophy, and Jun NH2-terminal kinase activation. Furthermore, bone marrow transplant studies reveal that these improvements in obese aHZ mice are independent of reduced Pik3r1 expression in the hematopoietic compartment. Taken together, these studies demonstrate that Pik3r1 expression plays a critical role in mediating AT insulin sensitivity and, more so, suggest that reduced PI3K activity is a key step in the initiation and propagation of the inflammatory response in obese AT.", "title": "" }, { "docid": "a605dc0c5beb6da4ef82d36da491fea7", "text": "This paper presents an efficient hydroponic nutrient solution control system whose system parameters are optimized using genetic algorithm. A novel mamdani fuzzy inference system (FIS) that grades the quality of solution for a given set of control parameters has been used as its fitness function. The FIS evaluation function has been designed using expert opinion from researchers at Murugappa Chettiar Research Centre, India. To evaluate the performance of the proposed algorithm, a virtual hydroponic nutrient control system with a solution monitoring unit was designed using Labview. The designed algorithm demonstrated better convergence efficiency and resource utilization compared to conventional error function based nutrient solution control systems.", "title": "" }, { "docid": "5adc69802a73880b24286bd99d59fdcc", "text": "In this paper, the guidelines to design a high-voltage power converter based on the hybrid series parallel resonant topology, PRC-LCC, with a capacitor as output filter are established. As a consequence of the selection of this topology, transformer ratio, and therefore secondary volume, is reduced. The mathematical analysis provides an original equivalent circuit for the steady-state and dynamical behavior of the topology. A new way to construct high-voltage transformers is also proposed, pointing out the advantages and establishing an original method to evaluate the stray components of the transformer before construction. 
The way to make compatible the characteristics of both, topology and transformer is illustrated in the frame of a practical application. To demonstrate the feasibility of this solution, a high-voltage, high-power prototype is assembled and tested with good performance and similar behavior to the one predicted by the models. Experimental results are shown on this particular.", "title": "" }, { "docid": "70c8caf1bdbdaf29072903e20c432854", "text": "We show that the topological modular functor from Witten–Chern–Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor’s state space. A computational model based on Chern–Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere.", "title": "" }, { "docid": "73128099f3ddd19e4f88d10cdafbd506", "text": "BACKGROUND\nRecently, there has been an increased interest in the effects of essential oils on athletic performances and other physiological effects. This study aimed to assess the effects of Citrus sinensis flower and Mentha spicata leaves essential oils inhalation in two different groups of athlete male students on their exercise performance and lung function.\n\n\nMETHODS\nTwenty physical education students volunteered to participate in the study. The subjects were randomly assigned into two groups: Mentha spicata and Citrus sinensis (ten participants each). One group was nebulized by Citrus sinensis flower oil and the other by Mentha spicata leaves oil in a concentration of (0.02 ml/kg of body mass) which was mixed with 2 ml of normal saline for 5 min before a 1500 m running tests. Lung function tests were measured using a spirometer for each student pre and post nebulization giving the same running distance pre and post oils inhalation.\n\n\nRESULTS\nA lung function tests showed an improvement on the lung status for the students after inhaling of the oils. Interestingly, there was a significant increase in Forced Expiratory Volume in the first second and Forced Vital Capacity after inhalation for the both oils. Moreover significant reductions in the means of the running time were observed among these two groups. The normal spirometry results were 50 %, while after inhalation with M. spicata oil the ratio were 60 %.\n\n\nCONCLUSION\nOur findings support the effectiveness of M. spicata and C. sinensis essential oils on the exercise performance and respiratory function parameters. However, our conclusion and generalisability of our results should be interpreted with caution due to small sample size and lack of control groups, randomization or masking. We recommend further investigations to explain the mechanism of actions for these two essential oils on exercise performance and respiratory parameters.\n\n\nTRIAL REGISTRATION\nISRCTN10133422, Registered: May 3, 2016.", "title": "" }, { "docid": "404b019df8328bdc423fde18ab4a6fd6", "text": "The purpose of this study was to determine whether 155 ethnically diverse clients with traumatic brain injury (TBI) and stroke (cerebrovascular accident; CVA) who received occupational therapy services perceived that they reached self-identified goals related to tasks of daily life as measured by the Canadian Occupational Performance Measure (COPM). 
This study found that a statistically and clinically significant change in self-perceived performance and satisfaction with tasks of daily life occurred at the end of a client-centered occupational therapy program (p < .001). There were no significant differences in performance and satisfaction between the TBI and CVA groups. However, the group with right CVA reported a higher level of satisfaction with performance in daily activities than the group with left CVA (p = .03). The COPM process can effectively assist clients with neurological impairments in identifying meaningful occupational performance goals. The occupational therapist also can use the COPM to design occupation-based and client-centered intervention programs and measure occupational therapy outcomes.", "title": "" }, { "docid": "c92138588c6f33bb4428dba0ed512eba", "text": "Today is an age of Big data. Big data is the normally unstructured data. Apache Hive is largely used for analysis in process of huge data. Because it is like SQL so easy to get analytical report. The main problem is that unstructured data loading and storage as well as Fast and timely analysis of large amount of data. There are data Compression columnar format like ORC(Optimized Row And Columnar) and Parquet columnar format. In this paper we used USGS (United States Geological Survey) Earthquake dataset. USGS provides the multi-Dimension dataset of earthquake of every day, week and month. We applied hadoop Hive's ORC format On monthly USGS earthquake dataset. ORC format Stored dataset efficiently without lose so that the most important data without losing stored on HDFS. We compare result of ORC Sorted and Unsorted dataset on the basses of time required to load the dataset on HDFS.", "title": "" }, { "docid": "653ceb874af4ba288375b75860abf076", "text": "This survey attempts to provide a comprehensive and structured overview of the existing research for the problem of detecting anomalies in discrete sequences. The aim is to provide a global understanding of the sequence anomaly detection problem and how techniques proposed for different domains relate to each other. Our specific contributions are as follows: We identify three distinct formulations of the anomaly detection problem, and review techniques from many disparate and disconnected domains that address each of these formulations. Within each problem formulation, we group techniques into categories based on the nature of the underlying algorithm. For each category, we provide a basic anomaly detection technique, and show how the existing techniques are variants of the basic technique. This approach shows how different techniques within a category are related or different from each other. Our categorization reveals new variants and combinations that have not been investigated before for anomaly detection. We also provide a discussion of relative strengths and weaknesses of different techniques. We show how techniques developed for one problem formulation can be adapted to solve a different formulation; thereby providing several novel adaptations to solve the different problem formulations. We highlight the applicability of the techniques that handle discrete sequences to other related areas such as online anomaly detection and time series anomaly detection.", "title": "" }, { "docid": "41c2a5a3a354c670b91ab4bcd5b6c9ff", "text": "Two classes of modern missing data procedures, maximum likelihood (ML) and multiple imputation (MI), tend to yield similar results when implemented in comparable ways. 
In either approach, it is possible to include auxiliary variables solely for the purpose of improving the missing data procedure. A simulation was presented to assess the potential costs and benefits of a restrictive strategy, which makes minimal use of auxiliary variables, versus an inclusive strategy, which makes liberal use of such variables. The simulation showed that the inclusive strategy is to be greatly preferred. With an inclusive strategy not only is there a reduced chance of inadvertently omitting an important cause of missingness, there is also the possibility of noticeable gains in terms of increased efficiency and reduced bias, with only minor costs. As implemented in currently available software, the ML approach tends to encourage the use of a restrictive strategy, whereas the MI approach makes it relatively simple to use an inclusive strategy.", "title": "" }, { "docid": "c3f2726c10ebad60d715609f15b67b43", "text": "Sleep-waking cycles are fundamental in human circadian rhythms and their disruption can have consequences for behaviour and performance. Such disturbances occur due to domestic or occupational schedules that do not permit normal sleep quotas, rapid travel across multiple meridians and extreme athletic and recreational endeavours where sleep is restricted or totally deprived. There are methodological issues in quantifying the physiological and performance consequences of alterations in the sleep-wake cycle if the effects on circadian rhythms are to be separated from the fatigue process. Individual requirements for sleep show large variations but chronic reduction in sleep can lead to immuno-suppression. There are still unanswered questions about the sleep needs of athletes, the role of 'power naps' and the potential for exercise in improving the quality of sleep.", "title": "" }, { "docid": "d8bd48a231374a82f31e6363881335c4", "text": "Adversarial examples are inputs to machine learning models designed to cause the model to make a mistake. They are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation. In NLP, however, most example generation strategies produce input text by using known, pre-specified semantic transformations, requiring significant manual effort and in-depth understanding of the problem and domain. In this paper, we investigate the problem of automatically generating adversarial examples that violate a set of given First-Order Logic constraints in Natural Language Inference (NLI). We reduce the problem of identifying such adversarial examples to a combinatorial optimisation problem, by maximising a quantity measuring the degree of violation of such constraints and by using a language model for generating linguisticallyplausible examples. Furthermore, we propose a method for adversarially regularising neural NLI models for incorporating background knowledge. Our results show that, while the proposed method does not always improve results on the SNLI and MultiNLI datasets, it significantly and consistently increases the predictive accuracy on adversarially-crafted datasets – up to a 79.6% relative improvement – while drastically reducing the number of background knowledge violations. 
Furthermore, we show that adversarial examples transfer among model architectures, and that the proposed adversarial training procedure improves the robustness of NLI models to adversarial examples.", "title": "" }, { "docid": "60be5aa3a7984f0e057d92ae74fae916", "text": "Reading requires the interaction between multiple cognitive processes situated in distant brain areas. This makes the study of functional brain connectivity highly relevant for understanding developmental dyslexia. We used seed-voxel correlation mapping to analyse connectivity in a left-hemispheric network for task-based and resting-state fMRI data. Our main finding was reduced connectivity in dyslexic readers between left posterior temporal areas (fusiform, inferior temporal, middle temporal, superior temporal) and the left inferior frontal gyrus. Reduced connectivity in these networks was consistently present for 2 reading-related tasks and for the resting state, showing a permanent disruption which is also present in the absence of explicit task demands and potential group differences in performance. Furthermore, we found that connectivity between multiple reading-related areas and areas of the default mode network, in particular the precuneus, was stronger in dyslexic compared with nonimpaired readers.", "title": "" }, { "docid": "5aead46411e6adc442509f2ce11167e9", "text": "We present an outline of our newly created multimodal dialogue corpus that is constructed from public domain movies. Dialogues in movies are useful sources for analyzing human communication patterns. In addition, they can be used to train machine-learning-based dialogue processing systems. However, the movie files are processing intensive and they contain large portions of non-dialogue segments. Therefore, we created a corpus that contains only dialogue segments from movies. The corpus contains 165,368 dialogue segments taken from 1,722 movies. These dialogues are automatically segmented by using deep neural network-based voice activity detection with filtering rules. Our corpus can reduce the human workload and machine-processing effort required to analyze human dialogue behavior by using movies.", "title": "" }, { "docid": "e9017607252973b36f9d4c3c659fe858", "text": "In this paper, we address the problem of retrospectively pruning decision trees induced from data, according to a topdown approach. This problem has received considerable attention in the areas of pattern recognition and machine learning, and many distinct methods have been proposed in literature. We make a comparative study of six well-known pruning methods with the aim of understanding their theoretical foundations, their computational complexity, and the strengths and weaknesses of their formulation. Comments on the characteristics of each method are empirically supported. In particular, a wide experimentation performed on several data sets leads us to opposite conclusions on the predictive accuracy of simplified trees from some drawn in the literature. We attribute this divergence to differences in experimental designs. Finally, we prove and make use of a property of the reduced error pruning method to obtain an objective evaluation of the tendency to overprune/underprune observed in each method. Index Terms —Decision trees, top-down induction of decision trees, simplification of decision trees, pruning and grafting operators, optimal pruning, comparative studies. 
—————————— ✦ ——————————", "title": "" }, { "docid": "1a98b0d00afd29474fb40b76ca2b0ce6", "text": "The intended readership of this volume is the full range of behavioral scientists, mental health professionals, and students aspiring to such roles who work with children. This includes psychologists (applied, clinical, counseling, developmental, school, including academics, researchers, and practitioners), family counselors, psychiatrists, social workers, psychiatric nurses, child protection workers, and any other mental health professionals who work with children, adolescents, and their families.", "title": "" }, { "docid": "40839b46d8e5593d6c59e04cd5ec2316", "text": "The main focus of image mining is concerned with the classification of brain tumor in the CT scan brain images. The major steps involved in the system are: pre-processing, feature extraction, association rule mining and classification. Here, we present some experiments for tumor detection in MRI images. The pre-processing step has been done using the median filtering process and features have been extracted using texture feature extraction technique. The extracted features from the CT scan images are used to mine the association rules. The proposed method is used to classify the medical images for diagnosis. In this system we are going to use Decision Tree classification algorithm. The proposed method improves the efficiency than the traditional image mining methods. Here, results which we get are compared with Naive Bayesian classification algorithm.", "title": "" } ]
scidocsrr
bc62621f99fdf6a83e53e7c87417da5b
Design guidelines for a wearable robotic extra-finger
[ { "docid": "897633eee10c6bda8a3931b0fbf4d360", "text": "We present the Supernumerary Robotic Limbs (SRL), a wearable robot designed to assist human workers with additional arms and legs attached to the wearer's body. The SRL can work closely with the wearer by holding an object, positioning a workpiece, operating a powered tool, securing the human body, and more. Although the SRL has the potential to provide the wearer with greater strength, higher accuracy, flexibility, and dexterity, its control performance is hindered by unpredictable disturbances due to involuntary motions of the wearer, which include postural sway and physiological tremor. This paper presents 1) a Kalman filter approach to estimate the state of the SRL despite the involuntary wearer's motion, and 2) a method for improving the accuracy and stabilizing the human body and the SRL. The dynamics of the human-SRL system are analyzed, including human-induced disturbance models based on biomechanics literature. A discrete Kalman filter is constructed and its performance is evaluated in terms of error covariance. A “bracing” technique is then introduced to suppress the human-induced disturbances; one robotic limb grasps an environment structure and uses it as a support to attenuate the disturbances. We show how bracing can be used to shape the stiffness parameters at the robot base. This in turn allows to enhance state estimation accuracy in the areas of the workspace where the user needs assistance.", "title": "" } ]
[ { "docid": "2e6193301f53719e58782bece34cb55a", "text": "There is an increasing trend in using robots for medical purposes. One specific area is the rehabilitation. There are some commercial exercise machines used for rehabilitation purposes. However, these machines have limited use because of their insufficient motion freedom. In addition, these types of machines are not actively controlled and therefore can not accommodate complicated exercises required during rehabilitation. In this study, a rule based intelligent control methodology is proposed to imitate the faculties of an experienced physiotherapist. These involve interpretation of patient reactions, storing the information received, acting according to the available data, and learning from the previous experiences. Robot manipulator is driven by a servo motor and controlled by a computer using force/torque and position sensor information. Impedance control technique is selected for the force control.", "title": "" }, { "docid": "aa74720aa2d191b9eb25104ee3a33b1e", "text": "We present a photometric stereo technique that operates on time-lapse sequences captured by static outdoor webcams over the course of several months. Outdoor webcams produce a large set of uncontrolled images subject to varying lighting and weather conditions. We first automatically select a suitable subset of the captured frames for further processing, reducing the dataset size by several orders of magnitude. A camera calibration step is applied to recover the camera response function, the absolute camera orientation, and to compute the light directions for each image. Finally, we describe a new photometric stereo technique for non-Lambertian scenes and unknown light source intensities to recover normal maps and spatially varying materials of the scene.", "title": "" }, { "docid": "b01532d16cdc3d9e53a88a4e4fe2806d", "text": "We propose CAML, a meta-learning method for fast adaptation that partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks. At test time, the context parameters are updated with one or several gradient steps on a task-specific loss that is backpropagated through the shared part of the network. Compared to approaches that adjust all parameters on a new task (e.g., MAML), our method can be scaled up to larger networks without overfitting on a single task, is easier to implement, and saves memory writes during training and network communication at test time for distributed machine learning systems. We show empirically that this approach outperforms MAML, is less sensitive to the task-specific learning rate, can capture meaningful task embeddings with the context parameters, and outperforms alternative partitionings of the parameter vectors.", "title": "" }, { "docid": "9b6191f96f096035429583e8799a2eb2", "text": "Recognition of food images is challenging due to their diversity and practical for health care on foods for people. In this paper, we propose an automatic food image recognition system for 85 food categories by fusing various kinds of image features including bag-of-features~(BoF), color histogram, Gabor features and gradient histogram with Multiple Kernel Learning~(MKL). In addition, we implemented a prototype system to recognize food images taken by cellular-phone cameras. 
In the experiment, we have achieved the 62.52% classification rate for 85 food categories.", "title": "" }, { "docid": "d202c0bcf5c3bd568da5232a5c5142b3", "text": "In this paper, we revisit author identification research by conducting a new kind of large-scale reproducibility study: we select 15 of the most influential papers for author identification and recruit a group of students to reimplement them from scratch. Since no open source implementations have been released for the selected papers to date, our public release will have a significant impact on researchers entering the field. This way, we lay the groundwork for integrating author identification with information retrieval to eventually scale the former to the web. Furthermore, we assess the reproducibility of all reimplemented papers in detail, and conduct the first comparative evaluation of all approaches on three", "title": "" }, { "docid": "b33eaecf2aff15ecb2f0d256bde7e1bb", "text": "This paper presents an objective evaluation of various eye movement-based biometric features and their ability to accurately and precisely distinguish unique individuals. Eye movements are uniquely counterfeit resistant due to the complex neurological interactions and the extraocular muscle properties involved in their generation. Considered biometric candidates cover a number of basic eye movements and their aggregated scanpath characteristics, including: fixation count, average fixation duration, average saccade amplitudes, average saccade velocities, average saccade peak velocities, the velocity waveform, scanpath length, scanpath area, regions of interest, scanpath inflections, the amplitude-duration relationship, the main sequence relationship, and the pairwise distance between fixations. As well, an information fusion method for combining these metrics into a single identification algorithm is presented. With limited testing this method was able to identify subjects with an equal error rate of 27%. These results indicate that scanpath-based biometric identification holds promise as a behavioral biometric technique.", "title": "" }, { "docid": "4f6ce186679f9ab4f0aaada92ccf5a84", "text": "Sensor networks have a significant potential in diverse applications some of which are already beginning to be deployed in areas such as environmental monitoring. As the application logic becomes more complex, programming difficulties are becoming a barrier to adoption of these networks. The difficulty in programming sensor networks is not only due to their inherently distributed nature but also the need for mechanisms to address their harsh operating conditions such as unreliable communications, faulty nodes, and extremely constrained resources. Researchers have proposed different programming models to overcome these difficulties with the ultimate goal of making programming easy while making full use of available resources. In this article, we first explore the requirements for programming models for sensor networks. Then we present a taxonomy of the programming models, classified according to the level of abstractions they provide. We present an evaluation of various programming models for their responsiveness to the requirements. 
Our results point to promising efforts in the area and a discussion of the future directions of research in this area.", "title": "" }, { "docid": "c41ea96802e4b4f5de7d438fb54dbc6d", "text": "AIM\nThis study explores nurse managers' experiences in dealing with patient/family violence toward their staff.\n\n\nBACKGROUND\nStudies and guidelines have emphasised the responsibility of nurse managers to manage violence directed at their staff. Although studies on nursing staff have highlighted the ineffectiveness of strategies used by nurse managers, few have explored their perspectives on dealing with violence.\n\n\nMETHODS\nThis qualitative study adopted a grounded theory approach to explore the experiences of 26 Japanese nurse managers.\n\n\nRESULTS\nThe nurse managers made decisions using internalised ethical values, which included maintaining organisational functioning, keeping staff safe, advocating for the patient/family and avoiding moral transgressions. They resolved internal conflicts among their ethical values by repeating a holistic assessment and simultaneous approach consisting of damage control and dialogue. They facilitated the involved persons' understanding, acceptance and sensemaking of the incident, which contributed to a resolution of the internal conflicts among their ethical values.\n\n\nCONCLUSIONS\nNurse managers adhere to their ethical values when dealing with patient violence toward nurses. Their ethical decision-making process should be acknowledged as an effective strategy to manage violence.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nOrganisational strategies that support and incorporate managers' ethical decision-making are needed to prevent and manage violence toward nurses.", "title": "" }, { "docid": "d3c059d0889fc390a91d58aa82980fcc", "text": "In recent trends industries, organizations and many companies are using personal identification strategies like finger print identification, RFID for tracking attendance and etc. Among of all these personal identification strategies face recognition is most natural, less time taken and high efficient one. It’s has several applications in attendance management systems and security systems. The main strategy involve in this paper is taking attendance in organizations, industries and etc. using face detection and recognition technology. A time period is settled for taking the attendance and after completion of time period attendance will directly stores into storage device mechanically without any human intervention. A message will send to absent student parent mobile using GSM technology. This attendance will be uploaded into web server using Ethernet. This raspberry pi 2 module is used in this system to achieve high speed of operation. Camera is interfaced to one USB port of raspberry pi 2. Eigen faces algorithm is used for face detection and recognition technology. Eigen faces algorithm is less time taken and high effective than other algorithms like viola-jones algorithm etc. the attendance will directly stores in storage device like pen drive that is connected to one of the USB port of raspberry pi 2. This system is most effective, easy and less time taken for tracking attendance in organizations with period wise without any human intervention.", "title": "" }, { "docid": "b838cd18098a4824e8ae16d55c297cfb", "text": "While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. 
To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. We introduce a bottleneck between perception and control, enabling the networks to be trained independently, but then merged and fine-tuned in an end-to-end manner to further improve hand-eye coordination. On a canonical, planar visually-guided robot reaching task a fine-tuned accuracy of 1.6 pixels is achieved, a significant improvement over naive transfer (17.5 pixels), showing the potential for more complicated and broader applications. Our method provides a technique for more efficient learning and transfer of visuomotor policies for real robotic systems without relying entirely on large real-world robot datasets.", "title": "" }, { "docid": "f9b99ad1fcf9963cca29e7ddfca20428", "text": "Nested Named Entities (nested NEs), one containing another, are commonly seen in biomedical text, e.g., accounting for 16.7% of all named entities in GENIA corpus. While many works have been done in recognizing non-nested NEs, nested NEs have been largely neglected. In this work, we treat the task as a binary classification problem and solve it using Support Vector Machines. For each token in nested NEs, we use two schemes to set its class label: labeling as the outmost entity or the inner entity. Our preliminary results show that while the outmost labeling tends to work better in recognizing the outmost entities, the inner labeling recognizes the inner NEs better. This result should be useful for recognition of nested NEs.", "title": "" }, { "docid": "0c8947cbaa2226a024bf3c93541dcae1", "text": "As storage systems grow in size and complexity, they are increasingly confronted with concurrent disk failures together with multiple unrecoverable sector errors. To ensure high data reliability and availability, erasure codes with high fault tolerance are required. In this article, we present a new family of erasure codes with high fault tolerance, named GRID codes. They are called such because they are a family of strip-based codes whose strips are arranged into multi-dimensional grids. In the construction of GRID codes, we first introduce a concept of matched codes and then discuss how to use matched codes to construct GRID codes. In addition, we propose an iterative reconstruction algorithm for GRID codes. We also discuss some important features of GRID codes. Finally, we compare GRID codes with several categories of existing codes. Our comparisons show that for large-scale storage systems, our GRID codes have attractive advantages over many existing erasure codes: (a) They are completely XOR-based and have very regular structures, ensuring easy implementation; (b) they can provide up to 15 and even higher fault tolerance; and (c) their storage efficiency can reach up to 80% and even higher. All the advantages make GRID codes more suitable for large-scale storage systems.", "title": "" }, { "docid": "99582c5c50f5103f15a6777af94c6584", "text": "Depth estimation in computer vision and robotics is most commonly done via stereo vision (stereopsis), in which images from two cameras are used to triangulate and estimate distances. However, there are also numerous monocular visual cues— such as texture variations and gradients, defocus, color/haze, etc.—that have heretofore been little exploited in such systems. Some of these cues apply even in regions without texture, where stereo would work poorly. 
In this paper, we apply a Markov Random Field (MRF) learning algorithm to capture some of these monocular cues, and incorporate them into a stereo system. We show that by adding monocular cues to stereo (triangulation) ones, we obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone. This holds true for a large variety of environments, including both indoor environments and unstructured outdoor environments containing trees/forests, buildings, etc. Our approach is general, and applies to incorporating monocular cues together with any off-the-shelf stereo system.", "title": "" }, { "docid": "07425e53be0f6314d52e3b4de4d1b601", "text": "Delay discounting was investigated in opioid-dependent and non-drug-using control participants. The latter participants were matched to the former on age, gender, education, and IQ. Participants in both groups chose between hypothetical monetary rewards available either immediately or after a delay. Delayed rewards were $1,000, and the immediate-reward amount was adjusted until choices reflected indifference. This procedure was repeated at each of 7 delays (1 week to 25 years). Opioid-dependent participants were given a second series of choices between immediate and delayed heroin, using the same procedures (i.e., the amount of delayed heroin was that which could be purchased with $1,000). Opioid-dependent participants discounted delayed monetary rewards significantly more than did non-drug-using participants. Furthermore opioid-dependent participants discounted delayed heroin significantly more than delayed money.", "title": "" }, { "docid": "c2d17d5a5db10efafa4e56a2b6cd7afa", "text": "The main purpose of analyzing the social network data is to observe the behaviors and trends that are followed by people. How people interact with each other, what they usually share, what are their interests on social networks, so that analysts can focus new trends for the provision of those things which are of great interest for people so in this paper an easy approach of gathering and analyzing data through keyword based search in social networks is examined using NodeXL and data is gathered from twitter in which political trends have been analyzed. As a result it will be analyzed that, what people are focusing most in politics.", "title": "" }, { "docid": "45712feb68b83cc054027807c1a30130", "text": "A solar energy semiconductor cooling box is presented in the paper. The cooling box is compact and easy to carry, can be made a special refrigeration unit which is smaller according to user needs. The characteristics of the cooling box are its simple use and maintenance, safe performance, decentralized power supply, convenient energy storage, no environmental pollution, and so on. In addition, compared with the normal mechanical refrigeration, the semiconductor refrigeration system which makes use of Peltier effect does not require pumps, compressors and other moving parts, and so there is no wear and noise. It does not require refrigerant so it will not produce environmental pollution, and it also eliminates the complex transmission pipeline. The concrete realization form of power are “heat - electric - cold”, “light - electric - cold”, “light - heat - electric - cold”. In order to achieve the purpose of cooling, solar cells generate electricity to drive the semiconductor cooling devices. 
The working principle is mainly photovoltaic effect and the Peltier effect.", "title": "" }, { "docid": "25ad730b651ce9168fb008a6013e184f", "text": "Model-Based Engineering (MBE) is a promising approach to cope with the challenges of designing the next-generation automotive systems. The increasing complexity of automotive electronics, the platform, distributed real-time embedded software, and the need for continuous evolution from one generation to the next has necessitated highly productive design approaches. However, heterogeneity, interoperability, and the lack of formal semantic underpinning in modeling, integration, validation and optimization make design automation a big challenge, which becomes a hindrance to the wider application of MBE in the industry. This paper briefly presents the interoperability challenges in the context of MBE and summarizes our current contribution to address these challenges with regard to automotive control software systems. A novel model-based formal integration framework is being developed to enable architecture modeling, timing specification, formal semantics, design by contract and optimization in the system-level design. The main advantages of the proposed approach include its pervasive use of formal methods, architecture analysis and design language (AADL) and associated tools, a novel timing annex for AADL with an expressive timing relationship language, a formal contract language to express component-level requirements and validation of component integration, and the resulting high assurance system delivery.", "title": "" }, { "docid": "1e6c2319e7c9e51cd4e31107d56bce91", "text": "Marketing has been criticised from all spheres today since the real worth of all the marketing efforts can hardly be precisely determined. Today consumers are better informed and also misinformed at times due to the bombardment of various pieces of information through a new type of interactive media, i.e., social media (SM). In SM, communication is through dialogue channels wherein consumers pay more attention to SM buzz rather than promotions of marketers. The various forms of SM create a complex set of online social networks (OSN), through which word-of-mouth (WOM) propagates and influence consumer decisions. With the growth of OSN and user generated contents (UGC), WOM metamorphoses to electronic word-of-mouth (eWOM), which spreads in astronomical proportions. Previous works study the effect of external and internal influences in affecting consumer behaviour. However, today the need is to resort to multidisciplinary approaches to find out how SM influence consumers with eWOM and online reviews. This paper reviews the emerging trend of how multiple disciplines viz. Statistics, Data Mining techniques, Network Analysis, etc. are being integrated by marketers today to analyse eWOM and derive actionable intelligence.", "title": "" }, { "docid": "5807ace0e7e4e9a67c46f29a3f2e70e3", "text": "In this work we present a pedestrian navigation system for indoor environments based on the dead reckoning positioning method, 2D barcodes, and data from accelerometers and magnetometers. All the sensing and computing technologies of our solution are available in common smart phones. 
The need to create indoor navigation systems arises from the inaccessibility of the classic navigation systems, such as GPS, in indoor environments.", "title": "" }, { "docid": "31fe8edc8fa4d336801a4ab8d1d2d5f2", "text": "In this paper we describe our system for SemEval-2018 Task 7 on classification of semantic relations in scientific literature for clean (subtask 1.1) and noisy data (subtask 1.2). We compare two models for classification, a C-LSTM which utilizes only word embeddings and an SVM that also takes handcrafted features into account. To adapt to the domain of science we train word embeddings on scientific papers collected from arXiv.org. The hand-crafted features consist of lexical features to model the semantic relations as well as the entities between which the relation holds. Classification of Relations using Embeddings (ClaiRE) achieved an F1 score of 74.89% for the first subtask and 78.39% for the second.", "title": "" } ]
scidocsrr
ab354e4ef234869c7e79e215eef950b7
A Survey of Colormaps in Visualization
[ { "docid": "71576ab1edd5eadbda1f34baba91b687", "text": "Visualization can make a wide range of mobile applications more intuitive and productive. The mobility context and technical limitations such as small screen size make it impossible to simply port visualization applications from desktop computers to mobile devices, but researchers are starting to address these challenges. From a purely technical point of view, building more sophisticated mobile visualizations become easier due to new, possibly standard, software APIs such as OpenGLES and increasingly powerful devices. Although ongoing improvements would not eliminate most device limitations or alter the mobility context, they make it easier to create and experiment with alternative approaches.", "title": "" }, { "docid": "4cd0d1040e104b4e317e22760b2ced71", "text": "Color mapping is an important technique used in visualization to build visual representations of data and information. With output devices such as computer displays providing a large number of colors, developers sometimes tend to build their visualization to be visually appealing, while forgetting the main goal of clear depiction of the underlying data. Visualization researchers have profited from findings in adjoining areas such as human vision and psychophysics which, combined with their own experience, enabled them to establish guidelines that might help practitioners to select appropriate color scales and adjust the associated color maps, for particular applications. This survey presents an overview on the subject of color scales by focusing on important guidelines, experimental research work and tools proposed to help non-expert users. & 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "ea544860e3c8d8b154985af822c4a9ea", "text": "Learning to walk over a graph towards a target node for a given input query and a source node is an important problem in applications such as knowledge base completion (KBC). It can be formulated as a reinforcement learning (RL) problem with a known state transition model. To overcome the challenge of sparse reward, we develop a graph-walking agent called M-Walk, which consists of a deep recurrent neural network (RNN) and Monte Carlo Tree Search (MCTS). The RNN encodes the state (i.e., history of the walked path) and maps it separately to a policy, a state value and state-action Q-values. In order to effectively train the agent from sparse reward, we combine MCTS with the neural policy to generate trajectories yielding more positive rewards. From these trajectories, the network is improved in an off-policy manner using Q-learning, which modifies the RNN policy via parameter sharing. Our proposed RL algorithm repeatedly applies this policy-improvement step to learn the entire model. At test time, MCTS is again combined with the neural policy to predict the target node. Experimental results on several graph-walking benchmarks show that M-Walk is able to learn better policies than other RL-based methods, which are mainly based on policy gradients. M-Walk also outperforms traditional KBC baselines.", "title": "" }, { "docid": "ef6040561aaae594f825a6cabd4aa259", "text": "This study investigated the extent of young adults’ (N = 393; 17–30 years old) experience of cyberbullying, from the perspectives of cyberbullies and cyber-victims using an online questionnaire survey. The overall prevalence rate shows cyberbullying is still present after the schooling years. No significant gender differences were noted, however females outnumbered males as cyberbullies and cyber-victims. Overall no significant differences were noted for age, but younger participants were found to engage more in cyberbullying activities (i.e. victims and perpetrators) than the older participants. Significant differences were noted for Internet frequency with those spending 2–5 h online daily reported being more victimized and engage in cyberbullying than those who spend less than an hour daily. Internet frequency was also found to significantly predict cyber-victimization and cyberbullying, indicating that as the time spent on Internet increases, so does the chances to be bullied and to bully someone. Finally, a positive significant association was observed between cyber-victims and cyberbullies indicating that there is a tendency for cyber-victims to become cyberbullies, and vice versa. Overall it can be concluded that cyberbullying incidences are still taking place, even though they are not as rampant as observed among the younger users. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9c2debf407dce58d77910ccdfc55a633", "text": "In cybersecurity competitions, participants either create new or protect preconfigured information systems and then defend these systems against attack in a real-world setting. Institutions should consider important structural and resource-related issues before establishing such a competition. Critical infrastructures increasingly rely on information systems and on the Internet to provide connectivity between systems. Maintaining and protecting these systems requires an education in information warfare that doesn't merely theorize and describe such concepts. 
A hands-on, active learning experience lets students apply theoretical concepts in a physical environment. Craig Kaucher and John Saunders found that even for management-oriented graduate courses in information assurance, such an experience enhances the students' understanding of theoretical concepts. Cybersecurity exercises aim to provide this experience in a challenging and competitive environment. Many educational institutions use and implement these exercises as part of their computer science curriculum, and some are organizing competitions with commercial partners as capstone exercises, ad hoc hack-a-thons, and scenario-driven, multiday, defense-only competitions. Participants have exhibited much enthusiasm for these exercises, from the DEFCON capture-the-flag exercise to the US Military Academy's Cyber Defense Exercise (CDX). In February 2004, the US National Science Foundation sponsored the Cyber Security Exercise Workshop aimed at harnessing this enthusiasm and interest. The educators, students, and government and industry representatives attending the workshop discussed the feasibility and desirability of establishing regular cybersecurity exercises for postsecondary-level students. This article summarizes the workshop report.", "title": "" }, { "docid": "f5ccb75eed1be1d5c0c8e98b5fcf565c", "text": "In this paper, a self-guiding multimodal LSTM (sg-LSTM) image captioning model is proposed to handle uncontrolled imbalanced real-world image-sentence dataset. We collect FlickrNYC dataset from Flickr as our testbed with 306, 165 images and the original text descriptions uploaded by the users are utilized as the ground truth for training. Descriptions in FlickrNYC dataset vary dramatically ranging from short term-descriptions to long paragraph-descriptions and can describe any visual aspects, or even refer to objects that are not depicted. To deal with the imbalanced and noisy situation and to fully explore the dataset itself, we propose a novel guiding textual feature extracted utilizing a multimodal LSTM (m-LSTM) model. Training of m-LSTM is based on the portion of data in which the image content and the corresponding descriptions are strongly bonded. Afterwards, during the training of sg-LSTM on the rest training data, this guiding information serves as additional input to the network along with the image representations and the ground-truth descriptions. By integrating these input components into a multimodal block, we aim to form a training scheme with the textual information tightly coupled with the image content. The experimental results demonstrate that the proposed sg-LSTM model outperforms the traditional state-of-the-art multimodal RNN captioning framework in successfully describing the key components of the input images.", "title": "" }, { "docid": "e120320dbe8fa0e2475b96a0b07adec8", "text": "BACKGROUND\nProne hip extension (PHE) is a common and widely accepted test used for assessment of the lumbo-pelvic movement pattern. Considerable increased in lumbar lordosis during this test has been considered as impairment of movement patterns in lumbo-pelvic region. The purpose of this study was to investigate the change of lumbar lordosis in PHE test in subjects with and without low back pain (LBP).\n\n\nMETHOD\nA two-way mixed design with repeated measurements was used to investigate the lumbar lordosis changes during PHE in two groups of subjects with and without LBP. An equal number of subjects (N = 30) were allocated to each group. 
A standard flexible ruler was used to measure the size of lumbar lordosis in prone-relaxed position and PHE test in each group.\n\n\nRESULT\nThe result of two-way mixed-design analysis of variance revealed significant health status by position interaction effect for lumbar lordosis (P < 0.001). The main effect of test position on lumbar lordosis was statistically significant (P < 0.001). The lumbar lordosis was significantly greater in the PHE compared to prone-relaxed position in both subjects with and without LBP. The amount of difference in positions was statistically significant between two groups (P < 0.001) and greater change in lumbar lordosis was found in the healthy group compared to the subjects with LBP.\n\n\nCONCLUSIONS\nGreater change in lumbar lordosis during this test may be due to more stiffness in lumbopelvic muscles in the individuals with LBP.", "title": "" }, { "docid": "d3b501c19b65d276ec6f349b35f4da1f", "text": "The design of a macroscope constructed with photography lenses is described and several applications are demonstrated. The macroscope incorporates epi-illumination, a 0.4 numerical aperture, and a 40 mm working distance for imaging wide fields in the range of 1.5-20 mm in diameter. At magnifications of 1X to 2.5X, fluorescence images acquired with the macroscope were 100-700 times brighter than those obtained with commercial microscope objectives at similar magnifications. In several biological applications, the improved light collection efficiency (20-fold, typical) not only minimized bleaching effects, but, in concert with improved illumination throughput (15-fold, typical), significantly enhanced object visibility as well. Reduced phototoxicity and increased signal-to-noise ratios were observed in the in vivo real-time optical imaging of cortical activity using voltage-sensitive dyes. Furthermore, the macroscope has a depth of field which is 5-10 times thinner than that of a conventional low-power microscope. This shallow depth of field has facilitated the imaging of cortical architecture based on activity-dependent intrinsic cortical signals in the living primate brain. In these reflection measurements large artifacts from the surface blood vessels, which were observed with conventional lenses, were eliminated with the macroscope.", "title": "" }, { "docid": "6508fc8732fd22fde8c8ac180a2e19e3", "text": "The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. 
The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.", "title": "" }, { "docid": "1f6bf9c06b7ee774bc08848293b5c94a", "text": "The success of a virtual learning environment (VLE) depends to a considerable extent on student acceptance and use of such an e-learning system. After critically assessing models of technology adoption, including the Technology Acceptance Model (TAM), TAM2, and the Unified Theory of Acceptance and Usage of Technology (UTAUT), we build a conceptual model to explain the differences between individual students in the level of acceptance and use of a VLE. This model extends TAM2 and includes subjective norm, personal innovativeness in the domain of information technology, and computer anxiety. Data were collected from 45 Chinese participants in an Executive MBA program. After performing satisfactory reliability and validity checks, the structural model was tested with the use of PLS. Results indicate that perceived usefulness has a direct effect on VLE use. Perceived ease of use and subjective norm have only indirect effects via perceived usefulness. Both personal innovativeness and computer anxiety have direct effects on perceived ease of use only. Implications are that program managers in education should not only concern themselves with basic system design but also explicitly address individual differences between VLE users. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "184596076bf83518c3cf3f693e62cad7", "text": "High-K (HK) and Metal-Gate (MG) transistor reliability is very challenging both from the standpoint of introduction of new materials and requirement of higher field of operation for higher performance. In this paper, key and unique HK+MG intrinsic transistor reliability mechanisms observed on 32nm logic technology generation is presented. We'll present intrinsic reliability similar to or better than 45nm generation.", "title": "" }, { "docid": "abc2af0f9c4d94f6f7da6126c2146057", "text": "Steganography is the art of hiding the fact that communication is taking place, by hiding information in other information. Many different carrier file formats can be used, but digital images are the most popular because of their frequency on the Internet. For hiding secret information in images, there exists a large variety of steganographic techniques some are more complex than others and all of them have respective strong and weak points. Different applications have different requirements of the steganography technique used. For example, some applications may require absolute invisibility of the secret information, while others require a larger secret message to be hidden. This paper intends to give an overview of image steganography, its uses and techniques. It also attempts to identify the requirements of a good steganographic algorithm and briefly reflects on which steganographic techniques are more suitable for which applications.", "title": "" }, { "docid": "a110e4872095e8daf0974fa9cb051c39", "text": "The present study provides the first evidence that illiteracy can be reliably predicted from standard mobile phone logs. By deriving a broad set of mobile phone indicators reflecting users’ financial, social and mobility patterns we show how supervised machine learning can be used to predict individual illiteracy in an Asian developing country, externally validated against a large-scale survey. On average the model performs 10 times better than random guessing with a 70% accuracy. 
Further we show how individual illiteracy can be aggregated and mapped geographically at cell tower resolution. Geographical mapping of illiteracy is crucial to know where the illiterate people are, and where to put in resources. In underdeveloped countries such mappings are often based on out-dated household surveys with low spatial and temporal resolution. One in five people worldwide struggle with illiteracy, and it is estimated that illiteracy costs the global economy more than $1 trillion dollars each year [1]. These results potentially enable costeffective, questionnaire-free investigation of illiteracy-related questions on an unprecedented scale.", "title": "" }, { "docid": "bade302d28048eeb0578e5289e7dba23", "text": "The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry. HPC Component Architecture 4", "title": "" }, { "docid": "59e3a7004bd2e1e75d0b1c6f6d2a67d0", "text": "Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.", "title": "" }, { "docid": "63b0924fca1a50d5401a9ed7799cbe45", "text": "A universal input device for both text and Braille input was developed in a Glove-typed interface using all the joints of the four fingers and thumbs of both hands. The glove-typed device works as of now for input of Korean characters, numbers, and Braille characters using mode conversion. 
Considering the finger force and the fatigue from repeated finger motions, the input switch was made of conductible silicon ink, which is easy to apply to any type of surface, light, and enduring. The usability testing with (1) blind subjects showed the performance matching with a commercial Braille keypad, and (2) non-blind subjects for Korean characters showed comparable performance with cellular phone input keypads, but inferior to conventional keyboard. Subjects' performance showed that the chording gloves can input approximately 122 Braille characters per minute and 108 words per minute in Korean character. The chording gloves developed in our study is expected to be used with common computing devices such as PCs and PDAs, and can contribute to replacing the Braille-based note-takers with less expensive computing devices for blind users.", "title": "" }, { "docid": "64f4a275dce1963b281cd0143f5eacdc", "text": "Camera shake during exposure time often results in spatially variant blur effect of the image. The non-uniform blur effect is not only caused by the camera motion, but also the depth variation of the scene. The objects close to the camera sensors are likely to appear more blurry than those at a distance in such cases. However, recent non-uniform deblurring methods do not explicitly consider the depth factor or assume fronto-parallel scenes with constant depth for simplicity. While single image non-uniform deblurring is a challenging problem, the blurry results in fact contain depth information which can be exploited. We propose to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with only single blurry image as input. To this end, we present a unified layer-based model for depth-involved deblurring. We provide a novel layer-based solution using matting to partition the layers and an expectation-maximization scheme to solve this problem. This approach largely reduces the number of unknowns and makes the problem tractable. Experiments on challenging examples demonstrate that both depth and camera shake removal can be well addressed within the unified framework.", "title": "" }, { "docid": "985782304a1f52fdcedd04d4c239490d", "text": "Assisted living systems can help support elderly persons with their daily activities in order to help them maintain healthy and safety while living independently. However, most current systems are ineffective in actual situation, difficult to use and have a low acceptance rate. There is a need for an assisted living solution to become intelligent and also practical issues such as user acceptance and usability need to be resolved in order to truly assist elderly people. Small, inexpensive and low-powered consumption sensors are now available which can be used in assisted living applications to provide sensitive and responsive services based on users current environments and situations. This paper aims to address the issue of how to develop an activity recognition method for a practical assisted living system in term of user acceptance, privacy (non-visual) and cost. The paper proposes an activity recognition and classification method for detection of Activities of Daily Livings (ADLs) of an elderly person using small, low-cost, non-intrusive non-stigmatize wrist worn sensors. Experimental results demonstrate that the proposed method can achieve a high classification rate (>90%). 
Statistical tests are employed to support this high classification rate of the proposed method. Also, we prove that by combining data from temperature sensor and/or altimeter with accelerometer, classification accuracy can be improved. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4e16f0da61a79d73354306c3b705ef46", "text": "To study information fusion method under complex environment, a highly intelligent information fusion model is put forward- the NFE model. This model is an organic combination of neural network, fuzzy reasoning and expert system. Considering various factors which influence the performance of sensor, the NFE confidence estimator is presented from the engineering application point of view. This paper presents principles, frameworks as well as key algorithms of the NFE model, and makes a comparison between the NFE model and traditional fuzzy neural network model. Simulation experiments prove that the NFE model can realize target recognition more effectively against sensor failures and strong interference, and the results are more reliable.", "title": "" }, { "docid": "7c482427e4f0305c32210093e803eb78", "text": "A healable transparent capacitive touch screen sensor has been fabricated based on a healable silver nanowire-polymer composite electrode. The composite electrode features a layer of silver nanowire percolation network embedded into the surface layer of a polymer substrate comprising an ultrathin soldering polymer layer to confine the nanowires to the surface of a healable Diels-Alder cycloaddition copolymer and to attain low contact resistance between the nanowires. The composite electrode has a figure-of-merit sheet resistance of 18 Ω/sq with 80% transmittance at 550 nm. A surface crack cut on the conductive surface with 18 Ω is healed by heating at 100 °C, and the sheet resistance recovers to 21 Ω in 6 min. A healable touch screen sensor with an array of 8×8 capacitive sensing points is prepared by stacking two composite films patterned with 8 rows and 8 columns of coupling electrodes at 90° angle. After deliberate damage, the coupling electrodes recover touch sensing function upon heating at 80 °C for 30 s. A capacitive touch screen based on Arduino is demonstrated capable of performing quick recovery from malfunction caused by a razor blade cutting. After four cycles of cutting and healing, the sensor array remains functional.", "title": "" }, { "docid": "a61b2fc98a6754ede38865479a2d0b6f", "text": "Virtualization is a hot topic in the technology world. The technology enables a single computer to run multiple operating systems simultaneously. It lets companies use a single server for multiple tasks that would normally have to run on multiple servers, each running a different OS. Now, vendors are releasing products based on two lightweight virtualization approaches that also let a single operating system run several instances of the same OS or different OSs. However, today's new virtualization approaches do not try to emulate an entire hardware environment, as traditional virtualization does. They thus require fewer CPU and memory resources, which is why the technology is called \"lightweight\" virtualization. However, lightweight virtualization still faces several barriers to widespread adoption.", "title": "" }, { "docid": "c3195ff8dc6ca8c130f5a96ebe763947", "text": "The recent emergence of Cloud Computing has drastically altered everyone’s perception of infrastructure architectures, software delivery and development models. 
Projecting as an evolutionary step, following the transition from mainframe computers to client/server deployment models, cloud computing encompasses elements from grid computing, utility computing and autonomic computing, into an innovative deployment architecture. This rapid transition towards the clouds, has fuelled concerns on a critical issue for the success of information systems, communication and information security. From a security perspective, a number of unchartered risks and challenges have been introduced from this relocation to the clouds, deteriorating much of the effectiveness of traditional protection mechanisms. As a result the aim of this paper is twofold; firstly to evaluate cloud security by identifying unique security requirements and secondly to attempt to present a viable solution that eliminates these potential threats. This paper proposes introducing a Trusted Third Party, tasked with assuring specific security characteristics within a cloud environment. The proposed solution calls upon cryptography, specifically Public Key Infrastructure operating in concert with SSO and LDAP, to ensure the authentication, integrity and confidentiality of involved data and communications. The solution, presents a horizontal level of service, available to all implicated entities, that realizes a security mesh, within which essential trust is maintained.", "title": "" } ]
scidocsrr
37dd50d1e3dc40f4735b0f27f5c9ea37
Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms
[ { "docid": "aa8c85df6cf5291f98b707b995ec1768", "text": "http://www.sciencemag.org/cgi/content/full/313/5786/504 version of this article at: including high-resolution figures, can be found in the online Updated information and services, http://www.sciencemag.org/cgi/content/full/313/5786/504/DC1 can be found at: Supporting Online Material found at: can be related to this article A list of selected additional articles on the Science Web sites http://www.sciencemag.org/cgi/content/full/313/5786/504#related-content http://www.sciencemag.org/cgi/content/full/313/5786/504#otherarticles , 6 of which can be accessed for free: cites 8 articles This article 15 article(s) on the ISI Web of Science. cited by This article has been http://www.sciencemag.org/cgi/content/full/313/5786/504#otherarticles 4 articles hosted by HighWire Press; see: cited by This article has been http://www.sciencemag.org/about/permissions.dtl in whole or in part can be found at: this article permission to reproduce of this article or about obtaining reprints Information about obtaining", "title": "" } ]
[ { "docid": "1368ea6ddef1ac1c37261a532d630b7a", "text": "Synthetic aperture radar automatic target recognition (SAR-ATR) has made great progress in recent years. Most of the established recognition methods are supervised, which have strong dependence on image labels. However, obtaining the labels of radar images is expensive and time-consuming. In this paper, we present a semi-supervised learning method that is based on the standard deep convolutional generative adversarial networks (DCGANs). We double the discriminator that is used in DCGANs and utilize the two discriminators for joint training. In this process, we introduce a noisy data learning theory to reduce the negative impact of the incorrectly labeled samples on the performance of the networks. We replace the last layer of the classic discriminators with the standard softmax function to output a vector of class probabilities so that we can recognize multiple objects. We subsequently modify the loss function in order to adapt to the revised network structure. In our model, the two discriminators share the same generator, and we take the average value of them when computing the loss function of the generator, which can improve the training stability of DCGANs to some extent. We also utilize images of higher quality from the generated images for training in order to improve the performance of the networks. Our method has achieved state-of-the-art results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, and we have proved that using the generated images to train the networks can improve the recognition accuracy with a small number of labeled samples.", "title": "" }, { "docid": "5945081c099c883d238dca2a1dfc821e", "text": "Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to fail or crash for a variety of reasons. Quantitative analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5 % of our CCSM4 simulations failed for numerical reasons at combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity from three different POP2 parameterizations were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.", "title": "" }, { "docid": "46a1dd05e29e206b9744bf15d48f5a5e", "text": "In this paper, we propose a distributed version of the Hungarian method to solve the well-known assignment problem. 
In the context of multirobot applications, all robots cooperatively compute a common assignment that optimizes a given global criterion (e.g., the total distance traveled) within a finite set of local computations and communications over a peer-to-peer network. As a motivating application, we consider a class of multirobot routing problems with “spatiotemporal” constraints, i.e., spatial targets that require servicing at particular time instants. As a means of demonstrating the theory developed in this paper, the robots cooperatively find online suboptimal routes by applying an iterative version of the proposed algorithm in a distributed and dynamic setting. As a concrete experimental test bed, we provide an interactive “multirobot orchestral” framework, in which a team of robots cooperatively plays a piece of music on a so-called orchestral floor.", "title": "" }, { "docid": "44abac09424c717f3a691e4ba2640c1a", "text": "In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases.", "title": "" }, { "docid": "d786d4cb7b57885bc0bb2c2bfd892336", "text": "Problem statement: Clustering is one of the most important research areas in the field of data mining. Clustering means creating groups of objects based on their features in such a way that the objects belonging to the same groups are similar and those belonging to different groups are dissimilar. Clustering is an unsupervised learning technique. The main advantage of clustering is that interesting patterns and structures can be found directly from very large data sets with little or none of the background knowledge. Clustering algorithms can be applied in many domains. Approach: In this research, the most representative algorithms K-Means and K-Medoids were examined and analyzed based on their basic approach. The best algorithm in each category was found out based on their performance. The input data points are generated by two ways, one by using normal distribution and another by applying uniform distribution. Results: The randomly distributed data points were taken as input to these algorithms and clusters are found out for each algorithm. The algorithms were implemented using JAVA language and the performance was analyzed based on their clustering quality.
The execution time for the algorithms in each category was compared for different runs. The accuracy of the algorithm was investigated during different execution of the program on the input data points. Conclusion: The average time taken by K-Means algorithm is greater than the time taken by K-Medoids algorithm for both the case of normal and uniform distributions. The results proved to be satisfactory.", "title": "" }, { "docid": "d87295095ef11648890b19cd0608d5da", "text": "Link prediction and recommendation is a fundamental problem in social network analysis. The key challenge of link prediction comes from the sparsity of networks due to the strong disproportion of links that they have potential to form to links that do form. Most previous work tries to solve the problem in single network, few research focus on capturing the general principles of link formation across heterogeneous networks. In this work, we give a formal definition of link recommendation across heterogeneous networks. Then we propose a ranking factor graph model (RFG) for predicting links in social networks, which effectively improves the predictive performance. Motivated by the intuition that people make friends in different networks with similar principles, we find several social patterns that are general across heterogeneous networks. With the general social patterns, we develop a transfer-based RFG model that combines them with network structure information. This model provides us insight into fundamental principles that drive the link formation and network evolution. Finally, we verify the predictive performance of the presented transfer model on 12 pairs of transfer cases. Our experimental results demonstrate that the transfer of general social patterns indeed help the prediction of links.", "title": "" }, { "docid": "16c6f17163718910f9bd48f8dbf36af8", "text": "Amanita muscaria is a mushroom with a bright red or orange cap covered with small white flecks. It contains the isoxazole derivatives ibotenic acid, muscimol and muscazone, as well as other toxins such as muscarine. The duration of clinical symptoms after ingestion of Amanita muscaria is usually no longer than 24 hours. We report a paranoid psychosis lasting five days after ingestion of Amanita muscaria. A 48-year-old man with a completely unremarkable medical history gathered and ate mushrooms that he identified as Amanita caesarea. Half an hour after ingestion he began to vomit and then fell asleep. He was found comatose with a seizure-like episode. On arrival at the hospital he was comatose; the rest of his physical and neurological examination was normal. Creatine kinase was 8.33 μkat/l. The remaining laboratory findings and the cranial CT were normal. Toxicological screening showed no evidence of drugs in his blood or urine. Our mushroom expert identified Amanita muscaria among the remaining mushrooms. The patient was given activated charcoal. Ten hours after ingestion of the mushroom he awoke; at that point he appeared fully orientated. Eighteen hours after ingestion his condition deteriorated again; the patient became confused and increasingly uncooperative. A paranoid psychotic state with visual and auditory hallucinations then appeared and lasted for five days. From the sixth day after ingestion the psychotic symptoms began to disappear.
One year later the patient is on no therapy and has no symptoms of psychiatric illness. We conclude from this that a paranoid psychosis with visual and auditory hallucinations can still appear 18 hours after ingestion of Amanita muscaria and can last for up to five days. Amanita muscaria has a bright red or orange cap covered with small white plaques. It contains the isoxazole derivatives ibotenic acid, muscimol and muscazone and other toxins such as muscarine. The duration of clinical manifestations after A. muscaria ingestion does not usually exceed 24 hours; we report on a 5-day paranoid psychosis after A. muscaria ingestion. A 48-year-old man, with no previous medical history, gathered and ate mushrooms he presumed to be A. caesarea. Half an hour later he started to vomit and fell asleep. He was found comatose having a seizure-like episode. On admission four hours after ingestion he was comatose, but the remaining physical and neurological examinations were unremarkable. Creatine kinase was 8.33 μkat/l. Other laboratory results and brain CT scan were normal. Toxicology analysis did not find any drugs in his blood or urine. The mycologist identified A. muscaria among the remaining mushrooms. The patient was given activated charcoal. Ten hours after ingestion, he awoke and was completely orientated; 18 hours after ingestion his condition deteriorated again and he became confused and uncooperative. Afterwards paranoid psychosis with visual and auditory hallucinations appeared and persisted for five days. On the sixth day all symptoms of psychosis gradually disappeared. One year later he is not undergoing any therapy and has no symptoms of psychiatric disease. We conclude that paranoid psychosis with visual and auditory hallucinations can appear 18 hours after ingestion of A. muscaria and can last for up to five days.", "title": "" }, { "docid": "5064d758b361171310ac31c323aa734b", "text": "The problem of assessing the significance of data mining results on high-dimensional 0-1 data sets has been studied extensively in the literature. For problems such as mining frequent sets and finding correlations, significance testing can be done by, e.g., chi-square tests, or many other methods. However, the results of such tests depend only on the specific attributes and not on the dataset as a whole. Moreover, the tests are more difficult to apply to sets of patterns or other complex results of data mining. In this paper, we consider a simple randomization technique that deals with this shortcoming. The approach consists of producing random datasets that have the same row and column margins with the given dataset, computing the results of interest on the randomized instances, and comparing them against the results on the actual data. This randomization technique can be used to assess the results of many different types of data mining algorithms, such as frequent sets, clustering, and rankings. To generate random datasets with given margins, we use variations of a Markov chain approach, which is based on a simple swap operation. We give theoretical results on the efficiency of different randomization methods, and apply the swap randomization method to several well-known datasets.
Our results indicate that for some datasets the structure discovered by the data mining algorithms is a random artifact, while for other datasets the discovered structure conveys meaningful information.", "title": "" }, { "docid": "92abe28875dbe72fbc16bdf41b324126", "text": "We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Further, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained via supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape-rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers. 1", "title": "" }, { "docid": "29360e31131f37830e0d6271bab63a6e", "text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.", "title": "" }, { "docid": "8fc40076329fce9d6ebcbddd96d37c41", "text": "Deep neural networks have achieved remarkable success in single image super-resolution (SISR). The computing and memory requirements of these methods have hindered their application to broad classes of real devices with limited computing power, however. One approach to this problem has been lightweight network architectures that balance the super-resolution performance and the computation burden. In this study, we revisit this problem from an orthogonal view, and propose a novel learning strategy to maximize the pixel-wise fitting capacity of a given lightweight network architecture. Considering that the initial capacity of the lightweight network is very limited, we present an adaptive importance learning scheme for SISR that trains the network with an easy-to-complex paradigm by dynamically updating the importance of image pixels on the basis of the training loss. Specifically, we formulate the network training and the importance learning into a joint optimization problem. With a carefully designed importance penalty function, the importance of individual pixels can be gradually increased through solving a convex optimization problem. 
The training process thus begins with pixels that are easy to reconstruct, and gradually proceeds to more complex pixels as fitting improves. Furthermore, the proposed learning scheme is able to seamlessly assimilate knowledge from a more powerful teacher network in the form of importance initialization, thus obtaining better initial capacity in the network. Through learning the network parameters, and updating pixel importance, the proposed learning scheme enables smaller, lightweight, networks to achieve better performance than has previously been possible. Extensive experiments on four benchmark datasets demonstrate the potential benefits of the proposed learning strategy in lightweight SISR network enhancement. In some cases, our learned network with only 25% of the parameters and computational complexity can produce comparable or even better results than the corresponding full-parameter network.", "title": "" }, { "docid": "d7108ba99aaa9231d926a52617baa712", "text": "In this paper, an ultra-compact single-chip solar energy harvesting IC using on-chip solar cell for biomedical implant applications is presented. By employing an on-chip charge pump with parallel connected photodiodes, a 3.5 <inline-formula> <tex-math notation=\"LaTeX\">$\times$</tex-math></inline-formula> efficiency improvement can be achieved when compared with the conventional stacked photodiode approach to boost the harvested voltage while preserving a single-chip solution. A photodiode-assisted dual startup circuit (PDSC) is also proposed to improve the area efficiency and increase the startup speed by 77%. By employing an auxiliary charge pump (AQP) using zero threshold voltage (ZVT) devices in parallel with the main charge pump, a low startup voltage of 0.25 V is obtained while minimizing the reversion loss. A <inline-formula> <tex-math notation=\"LaTeX\">$4\, {\mathbf{V}}_{\mathbf{in}}$</tex-math></inline-formula> gate drive voltage is utilized to reduce the conduction loss. Systematic charge pump and solar cell area optimization is also introduced to improve the energy harvesting efficiency. The proposed system is implemented in a standard 0.18- <inline-formula> <tex-math notation=\"LaTeX\">$\mu\text{m}$</tex-math></inline-formula> CMOS technology and occupies an active area of 1.54 <inline-formula> <tex-math notation=\"LaTeX\">$\text{mm}^{2}$</tex-math></inline-formula>. Measurement results show that the on-chip charge pump can achieve a maximum efficiency of 67%. With an incident power of 1.22 <inline-formula> <tex-math notation=\"LaTeX\">$\text{mW/cm}^{2}$</tex-math></inline-formula> from a halogen light source, the proposed energy harvesting IC can deliver an output power of 1.65 <inline-formula> <tex-math notation=\"LaTeX\">$\mu\text{W}$</tex-math></inline-formula> at 64% charge pump efficiency. The chip prototype is also verified using <italic>in-vitro</italic> experiment.", "title": "" }, { "docid": "461a4911e3dedf13db369d2b85861f77", "text": "This paper proposes a novel approach using a coarse-to-fine analysis strategy for sentence-level emotion classification which takes into consideration of similarities to sentences in training set as well as adjacent sentences in the context. 
First, we use intra-sentence based features to determine the emotion label set of a target sentence coarsely through the statistical information gained from the label sets of the k most similar sentences in the training data. Then, we use the emotion transfer probabilities between neighboring sentences to refine the emotion labels of the target sentences. Such iterative refinements terminate when the emotion classification converges. The proposed algorithm is evaluated on Ren-CECps, a Chinese blog emotion corpus. Experimental results show that the coarse-to-fine emotion classification algorithm improves the sentence-level emotion classification by 19.11% on the average precision metric, which outperforms the baseline methods.", "title": "" }, { "docid": "a870a0628c57f56c8162ff4495bec540", "text": "We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR). For compression, we introduce and study a trace norm regularization technique for training low rank factored versions of matrix multiplications. Compared to standard low rank training, we show that our method leads to good accuracy versus number of parameter trade-offs and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open sourced kernels optimized for small batch sizes, resulting in 3x to 7x speed ups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers.", "title": "" }, { "docid": "a5d8fa2e03cb51b30013a9e21477ef61", "text": "PURPOSE\nThe aim of this study was to establish the role of magnetic resonance imaging (MRI) in patients with Mayer-Rokitansky-Kuster-Hauser syndrome (MRKHS).\n\n\nMATERIALS AND METHODS\nSixteen female MRKHS patients (mean age, 19.4 years; range, 11-39 years) were studied using MRI. Two experienced radiologists evaluated all the images in consensus to assess the presence or absence of the ovaries, uterus, and vagina. Additional urogenital or vertebral pathologies were also noted.\n\n\nRESULTS\nOf the 16 patients, complete aplasia of uterus was seen in five patients (31.3%). Uterine hypoplasia or remnant uterus was detected in 11 patients (68.8%). Ovaries were clearly seen in 10 patients (62.5%), and in two of the 10 patients, no descent of ovaries was detected. In five patients, ovaries could not be detected on MRI. In one patient, agenesis of right ovary was seen, and the left ovary was in normal shape. Of the 16 cases, 11 (68.8%) had no other extragenital abnormalities. Additional abnormalities were detected in six patients (37.5%). Two of the six had renal agenesis, and one patient had horseshoe kidney; renal ectopy was detected in two patients, and one patient had urachal remnant. 
Vertebral abnormalities were detected in two patients; one had L5 posterior fusion defect, bilateral hemisacralization, and rotoscoliosis, and the other had coccygeal vertebral fusion.\n\n\nCONCLUSION\nMRI is a useful and noninvasive imaging method in the diagnosis and evaluation of patients with MRKHS.", "title": "" }, { "docid": "b958af84a3f977ea4c3efd854bd7de48", "text": "This paper presents the novel development of an embedded system that aims at digital TV content recommendation based on descriptive metadata collected from versatile sources. The described system comprises a user profiling subsystem identifying user preferences and a user agent subsystem performing content rating. TV content items are ranked using a combined multimodal approach integrating classification-based and keyword-based similarity predictions so that a user is presented with a limited subset of relevant content. Observable user behaviors are discussed as instrumental in user profiling and a formula is provided for implicitly estimating the degree of user appreciation of content. A new relation-based similarity measure is suggested to improve categorized content rating precision. Experimental results show that our system can recommend desired content to users with significant amount of accuracy.", "title": "" }, { "docid": "8649d115dea8cb6b3353745476b5c57d", "text": "OBJECTIVES\nTo test a brief, non-sectarian program of meditation training for effects on perceived stress and negative emotion, and to determine effects of practice frequency and test the moderating effects of neuroticism (emotional lability) on treatment outcome.\n\n\nDESIGN AND SETTING\nThe study used a single-group, open-label, pre-test post-test design conducted in the setting of a university medical center.\n\n\nPARTICIPANTS\nHealthy adults (N=200) interested in learning meditation for stress-reduction were enrolled. One hundred thirty-three (76% females) completed at least 1 follow-up visit and were included in data analyses.\n\n\nINTERVENTION\nParticipants learned a simple mantra-based meditation technique in 4, 1-hour small-group meetings, with instructions to practice for 15-20 minutes twice daily. Instruction was based on a psychophysiological model of meditation practice and its expected effects on stress.\n\n\nOUTCOME MEASURES\nBaseline and monthly follow-up measures of Profile of Mood States; Perceived Stress Scale; State-Trait Anxiety Inventory (STAI); and Brief Symptom Inventory (BSI). Practice frequency was indexed by monthly retrospective ratings. Neuroticism was evaluated as a potential moderator of treatment effects.\n\n\nRESULTS\nAll 4 outcome measures improved significantly after instruction, with reductions from baseline that ranged from 14% (STAI) to 36% (BSI). More frequent practice was associated with better outcome. Higher baseline neuroticism scores were associated with greater improvement.\n\n\nCONCLUSIONS\nPreliminary evidence suggests that even brief instruction in a simple meditation technique can improve negative mood and perceived stress in healthy adults, which could yield long-term health benefits. Frequency of practice does affect outcome. 
Those most likely to experience negative emotions may benefit the most from the intervention.", "title": "" }, { "docid": "9ed378ac0420b3ec29cd830355e65ee7", "text": "Drawing on the Theory of Planned Behavior (TPB), this research investigates two factors that drive an employee to comply with requirements of the information security policy (ISP) of her organization with regards to protecting information and technology resources: an employee’s information security awareness (ISA) and her perceived fairness of the requirements of the ISP. Our results, which is based on the PLS analysis of data collected from 464 participants, show that ISA and perceived fairness positively affect attitude, and in turn attitude positively affects intention to comply. ISA also has an indirect impact on attitude since it positively influences perceived fairness. As organizations strive to get their employees to follow their information security rules and regulations, our study sheds light on the role of an employee’s ISA and procedural fairness with regards to security rules and regulations in the workplace.", "title": "" }, { "docid": "387e9609e2fe3c6893b8ce0a1613f98a", "text": "Many fault-tolerant and intrusion-tolerant systems require the ability to execute unsafe programs in a realistic environment without leaving permanent damages. Virtual machine technology meets this requirement perfectly because it provides an execution environment that is both realistic and isolated. In this paper, we introduce an OS level virtual machine architecture for Windows applications called Feather-weight Virtual Machine (FVM), under which virtual machines share as many resources of the host machine as possible while still isolated from one another and from the host machine. The key technique behind FVM is namespace virtualization, which isolates virtual machines by renaming resources at the OS system call interface. Through a copy-on-write scheme, FVM allows multiple virtual machines to physically share resources but logically isolate their resources from each other. A main technical challenge in FVM is how to achieve strong isolation among different virtual machines and the host machine, due to numerous namespaces and interprocess communication mechanisms on Windows. Experimental results demonstrate that FVM is more flexible and scalable, requires less system resource, incurs lower start-up and run-time performance overhead than existing hardware-level virtual machine technologies, and thus makes a compelling building block for security and fault-tolerant applications.", "title": "" } ]
scidocsrr
65e8a18f254440fba764e0f2eabc8f1d
A comparison of named entity recognition tools applied to biographical texts
[ { "docid": "ac09e4a989bb4a9b247aa0ba346f1d71", "text": "Many applications in information extraction, natural language understanding, information retrieval require an understanding of the semantic relations between entities. We present a comprehensive review of various aspects of the entity relation extraction task. Some of the most important supervised and semi-supervised classification approaches to the relation extraction task are covered in sufficient detail along with critical analyses. We also discuss extensions to higher-order relations. Evaluation methodologies for both supervised and semi-supervised methods are described along with pointers to the commonly used performance evaluation datasets. Finally, we also give short descriptions of two important applications of relation extraction, namely question answering and biotext mining.", "title": "" }, { "docid": "d38df66fe85b4d12093965e649a70fe1", "text": "We describe the CoNLL-2002 shared task: language-independent named entity recognition. We give background information on the data sets and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.", "title": "" } ]
[ { "docid": "c8d33f21915a6f1403f046ffa17b6e2e", "text": "Synthetic aperture radar (SAR) image segmentation is a difficult problem due to the presence of strong multiplicative noise. To attain multi-region segmentation for SAR images, this paper presents a parametric segmentation method based on the multi-texture model with level sets. Segmentation is achieved by solving level set functions obtained from minimizing the proposed energy functional. To fully utilize image information, edge feature and region information are both included in the energy functional. For the need of level set evolution, the ratio of exponentially weighted averages operator is modified to obtain edge feature. Region information is obtained by the improved edgeworth series expansion, which can adaptively model a SAR image distribution with respect to various kinds of regions. The performance of the proposed method is verified by three high resolution SAR images. The experimental results demonstrate that SAR images can be segmented into multiple regions accurately without any speckle pre-processing steps by the proposed method.", "title": "" }, { "docid": "acd7a0c781003597883b453cbb816ead", "text": "This paper presents the design techniques and realization examples of innovative multilayer substrate integrated waveguide (SIW) structures for integrated wireless system applications. Such multilayered SIW implementation presents numerous advantages such as low profile, light weight, wideband characteristics, and easy integration with other devices and components. In this paper, the state-of-the-art of multilayer SIW passive components for low-cost high-density integrated transceiver design are presented. Filters, couplers, phase shifters, power dividers, and antenna arrays designed for specific applications are discussed and the advantages gained from multilayer schemes are described. Despite their easy fabrications and outstanding performances, these technologies are still struggling to compete with others for potential mainstream solutions. In this paper, we also discuss challenging issues in the development of multilayer SIW integrated modules that should enable a near-future successful widespread deployment.", "title": "" }, { "docid": "04f4c18860a98284de6d6a7e66592336", "text": "According to published literature : “Actigraphy is a non-invasive method of monitoring human rest/activity cycles. A small actigraph unit, also called an actimetry sensor is worn for a week or more to measure gross motor activity. The unit is usually, in a wrist-watch-like package, worn on the wrist. The movements the actigraph unit undergoes are continually recorded and some units also measure light exposure. 
The data can be later read to a computer and analysed offline; in some brands of sensors the data are transmitted and analysed in real time.” [1-9]. We are interested in focusing on the above-mentioned research topic, as per the title of this communication, and in suggesting an informatics and computational framework in the context of actigraphy using the ImageJ/Actigraphy Plugin, with JikesRVM as the Java Virtual Machine.", "title": "" }, { "docid": "28a185e08ec254647f8f6c6ad9160264", "text": "Journal front matter only (Elsevier copyright line, doi:10.1016/j.pnmrs.2008.12.001, a truncated abbreviations list covering NMR, RMSD, HSQC, NOE, RDC, MD, SSE, vdW and related terms, and corresponding-author contact details for B.R. Donald, http://www.cs.duke.edu/brd); no abstract text is recoverable from this passage.", "title": "" }, { "docid": "629ad6e43be26bc2243b0b13266ae213", "text": "An adaptive multilevel wavelet collocation method for solving multi-dimensional elliptic problems with localized structures is described. The method is based on multi-dimensional second generation wavelets, and is an extension of the dynamically adaptive second generation wavelet collocation method for evolution problems [Int. J. Comp. Fluid Dyn. 17 (2003) 151]. Wavelet decomposition is used for grid adaptation and interpolation, while a hierarchical finite difference scheme, which takes advantage of wavelet multilevel decomposition, is used for derivative calculations. The multilevel structure of the wavelet approximation provides a natural way to obtain the solution on a near optimal grid. In order to accelerate the convergence of the solver, an iterative procedure analogous to the multigrid algorithm is developed. The overall computational complexity of the solver is O(N), where N is the number of adapted grid points. The accuracy and computational efficiency of the method are demonstrated for the solution of two- and three-dimensional elliptic test problems.", "title": "" }, { "docid": "a4030b9aa31d4cc0a2341236d6f18b5a", "text": "Generative adversarial networks (GANs) have achieved huge success in unsupervised learning. Most GANs treat the discriminator as a classifier with the binary sigmoid cross entropy loss function. However, we find that the sigmoid cross entropy loss function will sometimes lead to the saturation problem in GANs learning. In this work, we propose to adopt the L2 loss function for the discriminator. The properties of the L2 loss function can improve the stabilization of GANs learning. With the usage of the L2 loss function, we propose the multi-class generative adversarial networks for the purpose of image generation with multiple classes. We evaluate the multi-class GANs on a handwritten Chinese characters dataset with 3740 classes. The experiments demonstrate that the multi-class GANs can generate elegant images on datasets with a large number of classes. 
Comparison experiments between the L2 loss function and the sigmoid cross entropy loss function are also conducted and the results demonstrate the stabilization of the L2 loss function.", "title": "" }, { "docid": "585d45d891c3e2344e6ad47822c9ee80", "text": "Sequence generation models for dialogue are known to have several problems: they tend to produce short, generic sentences that are uninformative and unengaging. Retrieval models on the other hand can surface interesting responses, but are restricted to the given retrieval set leading to erroneous replies that cannot be tuned to the specific context. In this work we develop a model that combines the two approaches to avoid both their deficiencies: first retrieve a response and then refine it – the final sequence generator treating the retrieval as additional context. We show on the recent CONVAI2 challenge task our approach produces responses superior to both standard retrieval and generation models in human evaluations.", "title": "" }, { "docid": "ac1e49efb5cc9c7d6e227c7e8f33e44e", "text": "Sentiment analysis refers to the class of computational and natural language processing based techniques used to identify, extract or characterize subjective information, such as opinions, expressed in a given piece of text. The main purpose of sentiment analysis is to classify a writer’s attitude towards various topics into positive, negative or neutral categories. Sentiment analysis has many applications in different domains including, but not limited to, business intelligence, politics, sociology, etc. Recent years, on the other hand, have witnessed the advent of social networking websites, microblogs, wikis and Web applications and consequently, an unprecedented growth in user-generated data is poised for sentiment mining. Data such as web-postings, Tweets, videos, etc., all express opinions on various topics and events, offer immense opportunities to study and analyze human opinions and sentiment. In this chapter, we study the information published by individuals in social media in cases of natural disasters and emergencies and investigate if such information could be used by first responders to improve situational awareness and crisis management. In particular, we explore applications of sentiment analysis and demonstrate how sentiment mining in social media can be exploited to determine how local crowds react during a disaster, and how such information can be used to improve disaster management. Such information can also be used to help assess the extent of the devastation and find people who are in specific need during an emergency situation. We first provide the formal definition of sentiment analysis in social media and cover traditional and the state-of-the-art approaches while highlighting contributions, shortcomings, and pitfalls due to the composition of online media streams. Next we discuss the relationship among social media, disaster relief and situational awareness and explain how social media is used in these contexts with the focus on sentiment analysis. In order to enable quick analysis of real-time geo-distributed data, we will detail applications of visual analytics with an emphasis on sentiment visualization. 
Finally, we conclude the chapter with a discussion of research challenges in sentiment analysis and its application in disaster relief.", "title": "" }, { "docid": "eb228251938f240cdcf7fed80e3079a6", "text": "We introduce an approach to biasing language models towards known contexts without requiring separate language models or explicit contextually-dependent conditioning contexts. We do so by presenting an alternative ASR objective, where we predict the acoustics and words given the contextual cue, such as the geographic location of the speaker. A simple factoring of the model results in an additional biasing term, which effectively indicates how correlated a hypothesis is with the contextual cue (e.g., given the hypothesized transcript, how likely is the user’s known location). We demonstrate that this factorization allows us to train relatively small contextual models which are effective in speech recognition. An experimental analysis shows a perplexity reduction of up to 35% and a relative reduction in word error rate of 1.6% on a targeted voice search dataset when using the user’s coarse location as a contextual cue.", "title": "" }, { "docid": "9c67049b5f934b47346592b73bc57dbe", "text": "In this paper, the problem of switching stabilization for a class of switched nonlinear systems is studied by using average dwell time (ADT) switching, where the subsystems are possibly all unstable. First, a new concept of ADT is given, which is different from the traditional definition of ADT. Based on the new proposed switching signals, a sufficient condition of stabilization for switched nonlinear systems with unstable subsystems is derived. Then, the T-S fuzzy modeling approach is applied to represent the underlying nonlinear system to make the obtained condition easily verified. A novel multiple quadratic Lyapunov function approach is also proposed, by which some conditions are provided in terms of a set of linear matrix inequalities to guarantee the derived T-S fuzzy system to be asymptotically stable. Finally, a numerical example is given to demonstrate the effectiveness of our developed results.", "title": "" }, { "docid": "5d794e791f571d7d9bc075dc57b22c61", "text": "STUDY DESIGN\nA quantitative biomechanical comparison of seven different lumbar spine \"stabilization exercises.\"\n\n\nOBJECTIVES\nThe purpose of this research was to quantify lumbar spine stability resulting from the muscle activation patterns measured when performing selected stabilization exercises.\n\n\nSUMMARY OF BACKGROUND DATA\nMany exercises are termed \"stabilization exercises\" for the low back; however, limited attempts have been made to quantify spine stability and the resultant tissue loading. Ranking resultant stability together with spinal load is very helpful for guiding clinical decision-making and therapeutic exercise design.\n\n\nMETHODS\nEight stabilization exercises were quantified in this study. Spine kinematics, external forces, and 14 channels of torso EMG were recorded for each exercise. 
These data were input into a modified version of a lumbar spine model described by Cholewicki and McGill (1996) to quantify stability and L4-L5 compression.\n\n\nRESULTS\nA rank order of the various exercises was produced based on stability, muscle activation levels, and lumbar compression.\n\n\nCONCLUSIONS\nQuantification of the calibrated muscle activation levels together with low back compression and resultant stability assists clinical decisions regarding the most appropriate exercise for specific patients and specific objectives.", "title": "" }, { "docid": "244ae725a4dffb70d71fdb5c5382d2c3", "text": "Front matter of a thesis or report: table-of-contents entries (an unlabelled first entry, Acknowledgements, List of Abbreviations, List of Figures, and List of Tables) with dot leaders and roman-numeral page numbers; no abstract text is recoverable from this passage.", "title": "" }, { "docid": "b160d69d87ad113286ee432239b090d7", "text": "Isogeometric analysis has been proposed as a methodology for bridging the gap between computer aided design (CAD) and finite element analysis (FEA). Although both the traditional and isogeometric pipelines rely upon the same conceptualization to solid model steps, they drastically differ in how they bring the solid model both to and through the analysis process. The isogeometric analysis process circumvents many of the meshing pitfalls experienced by the traditional pipeline by working directly within the approximation spaces used by the model representation. In this paper, we demonstrate that in a similar way as how mesh quality is used in traditional FEA to help characterize the impact of the mesh on analysis, an analogous concept of model quality exists within isogeometric analysis. The consequence of these observations is the need for a new area within modeling – analysis-aware modeling – in which model properties and parameters are selected to facilitate isogeometric analysis.", "title": "" }, { "docid": "51f5ba274068c0c03e5126bda056ba98", "text": "Electricity is conceivably the most multipurpose energy carrier in the modern global economy, and therefore primarily linked to human and economic development. Energy sector reform is critical to sustainable energy development and includes reviewing and reforming subsidies, establishing credible regulatory frameworks, developing policy environments through regulatory interventions, and creating market-based approaches. Energy security has recently become an important policy driver and privatization of the electricity sector has secured energy supply and provided cheaper energy services in some countries in the short term, but has led to contrary effects elsewhere due to increasing competition, resulting in deferred investments in plant and infrastructure due to longer-term uncertainties. On the other hand, global dependence on fossil fuels has led to the release of over 1100 GtCO2 into the atmosphere since the mid-19th century. 
Currently, energy-related GHG emissions, mainly from fossil fuel combustion for heat supply, electricity generation and transport, account for around 70% of total emissions including carbon dioxide, methane and some traces of nitrous oxide. This multitude of aspects plays a role in societal debate in comparing electricity generating and supply options, such as cost, GHG emissions, radiological and toxicological exposure, occupational health and safety, employment, domestic energy security, and social impressions. Energy systems engineering provides a methodological scientific framework to arrive at realistic integrated solutions to complex energy problems, by adopting a holistic, systems-based approach, especially at the decision-making and planning stage. Modeling and optimization have found widespread applications in the study of physical and chemical systems, production planning and scheduling systems, location and transportation problems, resource allocation in financial systems, and engineering design. This article reviews the literature on power and supply sector developments and analyzes the role of modeling and optimization in this sector as well as the future prospects of optimization modeling as a tool for sustainable energy systems.", "title": "" }, { "docid": "d781c28e343d63babafb0fd1353ae62c", "text": "The present study evaluated the personality characteristics and psychopathology of internet sex offenders (ISOs) using the Minnesota Multiphasic Personality Inventory, Second Edition (MMPI-2) to determine whether ISO personality profiles are different to those of general sex offenders (GSOs; e.g. child molesters and rapists). The ISOs consisted of 48 convicted males referred to a private sex offender treatment facility for a psychosexual risk assessment. The GSOs consisted of 104 incarcerated non-internet or general sex offenders. Findings indicated that ISOs scored significantly lower on the following scales: L, F, Pd and Sc. A comparison of the MMPI-2 scores of the ISO and GSO groups indicated that ISOs are a heterogeneous group with considerable within-group differences. Current findings are consistent with the existing literature on the limited utility of the MMPI-2 in differentiating between subtypes of sex offenders.", "title": "" }, { "docid": "094bb78ae482f2ad4877e53a446236f0", "text": "While the amount of available information on the Web is increasing rapidly, the problem of managing it becomes more difficult. We present two applications, Thinkbase and Thinkpedia, which aim to make Web content more accessible and usable by utilizing visualizations of the semantic graph as a means to navigate and explore large knowledge repositories. Both of our applications implement a similar concept: They extract semantically enriched contents from large knowledge spaces (Freebase and Wikipedia respectively), create an interactive graph-based representation out of it, and combine them into one interface together with the original text-based content. We describe the design and implementation of our applications, and provide a discussion based on an informal evaluation.", "title": "" }, { "docid": "1360ab7fef48f6913b188447aa3841b5", "text": "Optical music recognition (OMR) systems are used to convert music scanned from paper into a format suitable for playing or editing on a computer. 
These systems generally have two phases: recognizing the graphical symbols (such as note-heads and lines) and determining the musical meaning and relationships of the symbols (such as the pitch and rhythm of the notes). In this paper we explore the second phase and give a two-step approach that admits an economical representation of the parsing rules for the system. The approach is flexible and allows the system to be extended to new notations with little effort—the current system can parse common music notation, Sacred Harp notation and plainsong. It is based on a string grammar and a customizable graph that specifies relationships between musical objects. We observe that this graph can be related to printing as well as recognizing music notation, bringing the opportunity for cross-fertilization between the two areas of research.", "title": "" }, { "docid": "17287942eaf5c590b0d48b73eac7bc7c", "text": "The success of the Particle Swarm Optimization (PSO) algorithm as a single-objective optimizer (mainly when dealing with continuous search spaces) has motivated researchers to extend the use of this bio-inspired technique to other areas. One of them is multiobjective optimization. Despite the fact that the first proposal of a Multi-Objective Particle Swarm Optimizer (MOPSO) is over six years old, a considerable number of other algorithms have been proposed since then. This paper presents a comprehensive review of the various MOPSOs reported in the specialized literature. As part of this review, we include a classification of the approaches, and we identify the main features of each proposal. In the last part of the paper, we list some of the topics within this field that we consider as promising areas of future research.", "title": "" }, { "docid": "d276064ef10fa7e400bd922bd5d110da", "text": "Masquerading or impersonation attack refers to the illegitimate activity on a computer system when one user impersonates another user. Masquerade attacks are serious in nature due to the fact that they are mostly carried out by insiders and thus are extremely difficult to detect. Detection of these attacks is done by monitoring significant changes in user's behavior based on his/her profile. Currently, such profiles are based mostly on the user command line data and do not represent his/her complete behavior in a graphical user interface (GUI) based system and hence are not sufficient to quickly detect such masquerade attacks. In this paper, we present a new framework for creating a unique feature set for user behavior on GUI based systems. We have collected real user behavior data from live systems and extracted parameters to construct these feature vectors. These vectors contain user information such as mouse speed, distance, angles and amount of clicks during a user session. We model our technique of user identification and masquerade detection as a binary classification problem and use support vector machine (SVM) to learn and classify these feature vectors. We show that our technique can provide detection rates of up to 96% with few false positives based on these feature vectors. 
We have tested our technique with various feature vector parameters and conclude that these feature vectors can provide unique and comprehensive user behavior information and are powerful enough to detect masqueraders.", "title": "" }, { "docid": "c6abeae6e9287f04b472595a47e974ad", "text": "Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite. There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T., Brandeis, and Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real-world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software.", "title": "" } ]
scidocsrr
d038ab9f4978309049c9f90a6f294de0
Internet visual media processing: a survey with graphics and vision applications
[ { "docid": "12e5d45acb0c303845a01b006b547455", "text": "Photosketcher is an interactive system for progressively synthesizing novel images using only sparse user sketches as input. Photosketcher works on the image content exclusively; it doesn't require keywords or other metadata associated with the images. Users sketch the rough shape of a desired image part, and Photosketcher searches a large collection of images for it. The search is based on a bag-of-features approach that uses local descriptors for translation-invariant retrieval of image parts. Composition is based on user scribbles: from the scribbles, Photosketcher predicts the desired part using Gaussian mixture models and computes an optimal seam using graph cuts. To further reduce visible seams, users can blend the composite image in the gradient domain.", "title": "" }, { "docid": "8ebdf1614e9ee814dacdfe30eccb6a44", "text": "Colorization of a grayscale photograph often requires considerable effort from the user, either by placing numerous color scribbles over the image to initialize a color propagation algorithm, or by looking for a suitable reference image from which color information can be transferred. Even with this user supplied data, colorized images may appear unnatural as a result of limited user skill or inaccurate transfer of colors. To address these problems, we propose a colorization system that leverages the rich image content on the internet. As input, the user needs only to provide a semantic text label and segmentation cues for major foreground objects in the scene. With this information, images are downloaded from photo sharing websites and filtered to obtain suitable reference images that are reliable for color transfer to the given grayscale photo. Different image colorizations are generated from the various reference images, and a graphical user interface is provided to easily select the desired result. Our experiments and user study demonstrate the greater effectiveness of this system in comparison to previous techniques.", "title": "" }, { "docid": "0a761fba9fa9246261ca7627ff6afe91", "text": "Compositing is one of the most commonly performed operations in computer graphics. A realistic composite requires adjusting the appearance of the foreground and background so that they appear compatible; unfortunately, this task is challenging and poorly understood. We use statistical and visual perception experiments to study the realism of image composites. First, we evaluate a number of standard 2D image statistical measures, and identify those that are most significant in determining the realism of a composite. Then, we perform a human subjects experiment to determine how the changes in these key statistics influence human judgements of composite realism. Finally, we describe a data-driven algorithm that automatically adjusts these statistical measures in a foreground to make it more compatible with its background in a composite. We show a number of compositing results, and evaluate the performance of both our algorithm and previous work with a human subjects study.", "title": "" } ]
[ { "docid": "dd64ac591acfacb6ea514af3f104d0aa", "text": "FluMist influenza A vaccine strains contain the PB1, PB2, PA, NP, M, and NS gene segments of ca A/AA/6/60, the master donor virus-A strain. These gene segments impart the characteristic cold-adapted (ca), attenuated (att), and temperature-sensitive (ts) phenotypes to the vaccine strains. A plasmid-based reverse genetics system was used to create a series of recombinant hybrids between the isogenic non-ts wt A/Ann Arbor/6/60 and MDV-A strains to characterize the genetic basis of the ts phenotype, a critical, genetically stable, biological trait that contributes to the attenuation and safety of FluMist vaccines. PB1, PB2, and NP derived from MDV-A each expressed determinants of temperature sensitivity and the combination of all three gene segments was synergistic, resulting in expression of the characteristic MDV-A ts phenotype. Site-directed mutagenesis analysis mapped the MDV-A ts phenotype to the following four major loci: PB1(1195) (K391E), PB1(1766) (E581G), PB2(821) (N265S), and NP(146) (D34G). In addition, PB1(2005) (A661T) also contributed to the ts phenotype. The identification of multiple genetic loci that control the MDV-A ts phenotype provides a molecular basis for the observed genetic stability of FluMist vaccines.", "title": "" }, { "docid": "533dc0a0db5bd04c4270b5a896a6ad3b", "text": "OBJECTIVE\nMindfulness is a process whereby one is aware and receptive to present moment experiences. Although mindfulness-enhancing interventions reduce pathological mental and physical health symptoms across a wide variety of conditions and diseases, the mechanisms underlying these effects remain unknown. Converging evidence from the mindfulness and neuroscience literature suggests that labeling affect may be one mechanism for these effects.\n\n\nMETHODS\nParticipants (n = 27) indicated trait levels of mindfulness and then completed an affect labeling task while undergoing functional magnetic resonance imaging. The labeling task consisted of matching facial expressions to appropriate affect words (affect labeling) or to gender-appropriate names (gender labeling control task).\n\n\nRESULTS\nAfter controlling for multiple individual difference measures, dispositional mindfulness was associated with greater widespread prefrontal cortical activation, and reduced bilateral amygdala activity during affect labeling, compared with the gender labeling control task. Further, strong negative associations were found between areas of prefrontal cortex and right amygdala responses in participants high in mindfulness but not in participants low in mindfulness.\n\n\nCONCLUSIONS\nThe present findings with a dispositional measure of mindfulness suggest one potential neurocognitive mechanism for understanding how mindfulness meditation interventions reduce negative affect and improve health outcomes, showing that mindfulness is associated with enhanced prefrontal cortical regulation of affect through labeling of negative affective stimuli.", "title": "" }, { "docid": "629ad6e43be26bc2243b0b13266ae213", "text": "In recent years, data is becoming the most valuable asset. There are more and more data exchange markets on Internet. These markets help data owners publish their datasets and data consumers find appropriate services. However, different from traditional goods like clothes and food, data is a special commodity. For current data exchange markets, it is very hard to protect copyright and privacy. 
Moreover, maintaining data services requires special IT techniques, which is a difficult job for many organizations that own big datasets, such as hospitals, government departments, planetariums and banks. In this paper, we propose a decentralized solution for big data exchange. This solution aims at cultivating an ecosystem, inside which all participators can cooperate to exchange data in a peer-to-peer way. The core part of this solution is to utilize blockchain technology to record transaction logs and other important documents. Unlike existing data exchange markets, our solution does not need any third parties. It also provides a convenient way for data owners to audit the use of data, in order to protect data copyright and privacy. We will explain the ecosystem, and discuss the technical challenges and corresponding solutions.", "title": "" }, { "docid": "3f3ba8970ad046686a4c0fe11820da07", "text": "Agriculture contributes to a major portion of India's GDP. Two major issues in modern agriculture are water scarcity and high labor costs. These issues can be resolved using agriculture task automation, which encourages precision agriculture. Considering the abundance of sunlight in India, this paper discusses the design and development of an IoT-based solar-powered Agribot that automates the irrigation task and enables remote farm monitoring. The Agribot is developed using an Arduino microcontroller. It harvests solar power when not performing irrigation. While executing the task of irrigation, it moves along a pre-determined path of a given farm, and senses soil moisture content and temperature at regular points. At each sensing point, data acquired from multiple sensors is processed locally to decide the necessity of irrigation and accordingly the farm is watered. Further, Agribot acts as an IoT device and transmits the data collected from multiple sensors to a remote server using a Wi-Fi link. At the remote server, raw data is processed using signal processing operations such as filtering, compression and prediction. Accordingly, the analyzed data statistics are displayed using an interactive interface, as per user request.", "title": "" }, { "docid": "2ae773f548c1727a53a7eb43550d8063", "text": "Today's Internet hosts are threatened by large-scale distributed denial-of-service (DDoS) attacks. The path identification (Pi) DDoS defense scheme has recently been proposed as a deterministic packet marking scheme that allows a DDoS victim to filter out attack packets on a per-packet basis with high accuracy after only a few attack packets are received (Yaar, 2003). In this paper, we propose the StackPi marking, a new packet marking scheme based on Pi, and new filtering mechanisms. The StackPi marking scheme consists of two new marking methods that substantially improve Pi's incremental deployment performance: Stack-based marking and write-ahead marking. Our scheme almost completely eliminates the effect of a few legacy routers on a path, and performs 2-4 times better than the original Pi scheme in a sparse deployment of Pi-enabled routers. For the filtering mechanism, we derive an optimal threshold strategy for filtering with the Pi marking. We also develop a new filter, the PiIP filter, which can be used to detect Internet protocol (IP) spoofing attacks with just a single attack packet. 
Finally, we discuss in detail StackPi's compatibility with IP fragmentation, applicability in an IPv6 environment, and several other important issues relating to potential deployment of StackPi", "title": "" }, { "docid": "03e267aeeef5c59aab348775d264afce", "text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate &#x2248; object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu&#x2019;s multi-modal model with language priors [27].", "title": "" }, { "docid": "d699b6516696077a7caefd72a1c57bd1", "text": "In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the iedb MHC-I binding dataset, that our approach outperforms well-known convex methods for multi-task learning, as well as related non convex methods dedicated to the same problem. ∗To whom correspondance should be addressed: 35, rue Saint Honoré, F-77300 Fontainebleau, France.", "title": "" }, { "docid": "89cc631db97607dbb45c8b956e7dee2a", "text": "Although there is growing interest in measuring integrated information in computational and cognitive systems, current methods for doing so in practice are computationally unfeasible. Existing and novel integration measures are investigated and classified by various desirable properties. A simple taxonomy of Φ-measures is presented where they are each characterized by their choice of factorization method (5 options), choice of probability distributions to compare (3 × 4 options) and choice of measure for comparing probability distributions (7 options). 
When requiring the Φ-measures to satisfy a minimum of attractive properties, these hundreds of options reduce to a mere handful, some of which turn out to be identical. Useful exact and approximate formulas are derived that can be applied to real-world data from laboratory experiments without posing unreasonable computational demands.", "title": "" }, { "docid": "91f232a7cee24a898c9c2cf6d9938b55", "text": "In this letter, a 4 ×4 substrate integrated waveguide (SIW)-fed circularly polarized (CP) antenna array with a broad axial-ratio (AR) bandwidth is designed and fabricated by multilayer printed circuit board (PCB) technology. The antenna array consists of 16 sequentially rotated elliptical cavities fed by slots on the SIW acting as the radiating elements, four 1-to-4 SIW power dividers, and a transition from a coaxial cable to the SIW. The widened AR bandwidth of the antenna array is achieved by using an improved SIW power divider. The antenna prototype was fabricated and measured, and the discrepancies between simulations and measurements are carefully analyzed.", "title": "" }, { "docid": "f102d68c17e5882eeeca84aa2a677921", "text": "In this paper we propose a vision-based stabilization and output tracking control method for a model helicopter. A novel two-camera method is introduced for estimating the full six-degrees-of-freedom pose of the helicopter. One of these cameras is located on-board the helicopter, and the other camera is located on the ground. Unlike previous work, these two cameras are set to see each other. The pose estimation algorithm is compared in simulation to other methods and is shown to be less sensitive to errors on feature detection. In order to build an autonomous helicopter, two methods of control are studied: one using a series of mode-based, feedback linearizing controllers and the other using a backstepping-like control law. Various simulations demonstrate the implementation of these controllers. Finally, we present flight experiments where the proposed pose estimation algorithm and non-linear control techniques have been implemented on a remote-controlled helicopter. KEY WORDS—helicopter control, pose estimation, unmanned aerial vehicle, vision-based control", "title": "" }, { "docid": "841884bcdd50e7d459f4e1863ec08bd3", "text": "Organic light-emitting diodes (OLEDs) are competitive candidates for the next generation flat-panel displays and solid state lighting sources. Efficient blue-emitting materials have been one of the most important prerequisites to kick off the commercialization of OLEDs. This tutorial review focuses on the design of blue fluorescent emitters and their applications in OLEDs. At first, some typical blue fluorescent materials as dopants are briefly introduced. Then nondoped blue emitters of hydrocarbon compounds are presented. Finally, the nondoped blue emitters endowed with hole-, electron- and bipolar-transporting abilities are comprehensively reviewed. 
The key issues on suppressing close-packing, achieving pure blue chromaticity, improving thermal and morphological stabilities, manipulating charge transporting abilities, simplifying device structures and the applications in panchromatic OLEDs are discussed.", "title": "" }, { "docid": "9c8f54b087d90a2bcd9e3d7db1aabd02", "text": "The \"new Dark Silicon\" model benchmarks transistor technologies at the architectural level for multi-core processors.", "title": "" }, { "docid": "ac885eedad9c777e2980460d987c7cfb", "text": "BACKGROUND\nOne of the greatest problems for India is undernutrition among children. The country is still struggling with this problem. Malnutrition, the condition resulting from faulty nutrition, weakens the immune system and causes significant growth and cognitive delay. Growth assessment is the measurement that best defines the health and nutritional status of children, while also providing an indirect measurement of well-being for the entire population.\n\n\nMETHODS\nA cross-sectional study, in which we explored nutritional status in school-age slum children and analyze factors associated with malnutrition with the help of a pre-designed and pre-tested questionnaire, anthropometric measurements and clinical examination from December 2010 to April 2011 in urban slums of Bareilly, Uttar-Pradesh (UP), India.\n\n\nRESULT\nThe mean height and weight of boys and girls in the study group was lower than the CDC 2000 (Centers for Disease Control and Prevention) standards in all age groups. Regarding nutritional status, prevalence of stunting and underweight was highest in age group 11 yrs to 13 yrs whereas prevalence of wasting was highest in age group 5 yrs to 7 yrs. Except refractive errors all illnesses are more common among girls, but this gender difference is statistically significant only for anemia and rickets. The risk of malnutrition was significantly higher among children living in joint families, children whose mother's education was [less than or equal to] 6th standard and children with working mothers.\n\n\nCONCLUSIONS\nMost of the school-age slum children in our study had a poor nutritional status. Interventions such as skills-based nutrition education, fortification of food items, effective infection control, training of public healthcare workers and delivery of integrated programs are recommended.", "title": "" }, { "docid": "e9545a15187ab7638c9dd5fadf1e8f2e", "text": "Nowadays, artificial intelligence algorithms are used for targeted and personalized content distribution in the large scale as part of the intense competition for attention in the digital media environment. Unfortunately, targeted information dissemination may result in intellectual isolation and discrimination. Further, as demonstrated in recent political events in the US and EU, malicious bots and social media users can create and propagate targeted “fake news” content in different forms for political gains. From the other direction, fake news detection algorithms attempt to combat such problems by identifying misinformation and fraudulent user profiles. This paper reviews common news feed algorithms as well as methods for fake news detection, and we discuss how news feed algorithms could be misused to promote falsified content, affect news diversity, or impact credibility. We review how news feed algorithms and recommender engines can enable confirmation bias to isolate users to certain news sources and affecting the perception of reality. 
As a potential solution for increasing user awareness of how content is selected or sorted, we argue for the use of interpretable and explainable news feed algorithms. We discuss how improved user awareness and system transparency could mitigate unwanted outcomes of echo chambers and bubble filters in social media.", "title": "" }, { "docid": "c1cdc9bb29660e910ccead445bcc896d", "text": "This paper describes an efficient technique for computing a hierarchical representation of the objects contained in a complex 3D scene. First, an adjacency graph keeping the costs of grouping the different pairs of objects in the scene is built. Then the minimum spanning tree (MST) of that graph is determined. A binary clustering tree (BCT) is obtained from the MST. Finally, a merging stage joins the adjacent nodes in the BCT which have similar costs. The final result is an n-ary tree which defines an intuitive clustering of the objects of the scene at different levels of abstraction. Experimental results with synthetic 3D scenes are presented.", "title": "" }, { "docid": "c2a297417553cb46fd98353d8b8351ac", "text": "Recent advances in methods and techniques enable us to develop an interactive overlay to the global map of science based on aggregated citation relations among the 9,162 journals contained in the Science Citation Index and Social Science Citation Index 2009 combined. The resulting mapping is provided by VOSViewer. We first discuss the pros and cons of the various options: cited versus citing, multidimensional scaling versus spring-embedded algorithms, VOSViewer versus Gephi, and the various clustering algorithms and similarity criteria. Our approach focuses on the positions of journals in the multidimensional space spanned by the aggregated journal-journal citations. A number of choices can be left to the user, but we provide default options reflecting our preferences. Some examples are also provided; for example, the potential of using this technique to assess the interdisciplinarity of organizations and/or document sets.", "title": "" }, { "docid": "df2bc3dce076e3736a195384ae6c9902", "text": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.", "title": "" }, { "docid": "7d1d34365f9f839c907d5a2fba74d8c1", "text": "This paper presents two circuit topologies of battery-less integrated boost oscillators suitable for kick-starting electronic systems in fully discharged states with ultra-low input voltages, in the context of energy harvesting applications based on thermoelectric generators, by coupling a piezoelectric transformer in a feedback loop. 
With respect to the prior work, the first presented solution is a double polarity circuit designed in a 0.18 μm CMOS technology able to boost ultra-low positive and negative voltages without using switching matrixes. The circuit exploits a CMOS inverter made up of low threshold transistors, and also includes a hysteretic voltage monitor consuming only ~15 nW to enable an external circuit. The minimum achieved positive and negative oscillation voltages are +15 and −8 mV, which, to the best of the authors’ knowledge, are among the lowest start-up voltages achieved in literature up to now without using magnetic components. Moreover, the input impedance in the range of several kΩ makes the presented solution suitable also for high-impedance sources, such as rectennas. The second presented circuit, designed in a 0.32 μm CMOS technology, exploits an input stage based on depletion-mode MOSFETs in a common source stage configuration and achieves a maximum step ratio of ~60.", "title": "" }, { "docid": "c0619cc366cd960a4fe9a1cbf7dddc4f", "text": "This review deals with alien species invasion in Southeast Asia, an important conservation and management concern in the region. I report on the current and potential future impacts of biological invasions on biodiversity in Southeast Asia. Current knowledge of the invasive species in Southeast Asia is mostly based on anecdotal observations. Nevertheless, I attempt to compile existing empirical evidence on the negative effects of the biological invaders found in the region. These impacts include displacement of native biota, modification of ecosystems, hybridization, environmental disturbance, and economic loss. Any effective counter-measure will need to involve a multi-national strategy, yet such measure is challenging due to a broad spectrum of political and economic development models among the Southeast Asian countries. An overview of the taxonomic structure of the invasive species in Southeast Asia shows that the invasive plant and fish are the most represented taxonomic groups in all countries. The current research effort in invasion ecology from Southeast Asia is not being up to international standard in comparison to other regions, and the absence of recent international journal articles on invasive plant species reveals the biases in biological invasion-related research. The lack of research capacity and financial support from governments, and the inability to disseminate scholarly data in international journals are the possible reasons for the dearth of research literature on biological invasions from the region. Finally, a forward-looking agenda for the region should include improving the quality and quantity of biological invasion research; adopting a tough approach to the illegal release of wildlife; and applying multi-national strategies that integrate data sharing, prioritization, public awareness, policy work, capacity building, conservation actions and surveillance.", "title": "" }, { "docid": "59ac485d21c761f523bcd9ba303032e6", "text": "Text categorization is the process of grouping documents into categories based on their contents. 
This process is important because it makes information retrieval easier, and it has become more important due to the huge amount of textual information available online. The main problem in text categorization is how to improve the classification accuracy. Although Arabic text categorization is a new and promising field, little research has been done in this area. This paper proposes a new method for Arabic text categorization using vector evaluation. The proposed method uses a categorized Arabic document corpus; the weights of the tested document's words are then calculated to determine the document keywords, which are compared with the keywords of the corpus categories to determine the tested document's best category.", "title": "" } ]
scidocsrr
7cc5fc136f813da8ac2b8b6df6741223
Satisfaction with male-to-female gender reassignment surgery.
[ { "docid": "9c118c312d8118e9a71fa0d17fa42b51", "text": "The Standards of Care (SOC) for the Health of Transsexual, Transgender, and Gender Nonconforming People is a publication of the World Professional Association for Transgender Health (WPATH). The overall goal of the SOC is to provide clinical guidance for health professionals to assist transsexual, transgender, and gender nonconforming people with safe and effective pathways to achieving lasting personal comfort with their gendered selves, in order to maximize their overall health, psychological well-being, and self-fulfillment. This assistance may include primary care, gynecologic and urologic care, reproductive options, voice and communication therapy, mental health services (e.g., assessment, counseling, psychotherapy), and hormonal and surgical treatments. The SOC are based on the best available science and expert professional consensus. Because most of the research and experience in this field comes from a North American and Western European perspective, adaptations of the SOC to other parts of the world are necessary. The SOC articulate standards of care while acknowledging the role of making informed choices and the value of harm reduction approaches. In addition, this version of the SOC recognizes that treatment for gender dysphoria i.e., discomfort or distress that is caused by a discrepancy between persons gender identity and that persons sex assigned at birth (and the associated gender role and/or primary and secondary sex characteristics) has become more individualized. Some individuals who present for care will have made significant self-directed progress towards gender role changes or other resolutions regarding their gender identity or gender dysphoria. Other individuals will require more intensive services. Health professionals can use the SOC to help patients consider the full range of health services open to them, in accordance with their clinical needs and goals for gender expression.", "title": "" }, { "docid": "6569b0630f9d9b9a5e3ca0849829f8cb", "text": "A long-term follow-up study of 55 transsexual patients (32 male-to-female and 23 female-to-male) post-sex reassignment surgery (SRS) was carried out to evaluate sexual and general health outcome. Relatively few and minor morbidities were observed in our group of patients, and they were mostly reversible with appropriate treatment. A trend toward more general health problems in male-to-females was seen, possibly explained by older age and smoking habits. Although all male-to-females, treated with estrogens continuously, had total testosterone levels within the normal female range because of estrogen effects on sex hormone binding globulin, only 32.1% reached normal free testosterone levels. After SRS, the transsexual person's expectations were met at an emotional and social level, but less so at the physical and sexual level even though a large number of transsexuals (80%) reported improvement of their sexuality. The female-to-males masturbated significantly more frequently than the male-to-females, and a trend to more sexual satisfaction, more sexual excitement, and more easily reaching orgasm was seen in the female-to-male group. The majority of participants reported a change in orgasmic feeling, toward more powerful and shorter for female-to-males and more intense, smoother, and longer in male-to-females. 
Over two-thirds of male-to-females reported the secretion of a vaginal fluid during sexual excitation, originating from the Cowper's glands left in place during surgery. In female-to-males with an erection prosthesis, sexual expectations were more fully realized (compared to those without), but pain during intercourse was reported more often.", "title": "" } ]
[ { "docid": "eb60f1aaa5980920a0932c61a536eb0d", "text": "Efficient distributed numerical word representation models (word embeddings) combined with modern machine learning algorithms have recently yielded considerable improvement on automatic document classification tasks. However, the effectiveness of such techniques has not been assessed for the hierarchical text classification (HTC) yet. This study investigates application of those models and algorithms on this specific problem by means of experimentation and analysis. We trained classification models with prominent machine learning algorithm implementations—fastText, XGBoost, SVM, and Keras’ CNN—and noticeable word embeddings generation methods—GloVe, word2vec, and fastText—with publicly available data and evaluated them with measures specifically appropriate for the hierarchical context. FastText achieved an lcaF1 of 0.893 on a single-labeled version of the RCV1 dataset. An analysis indicates that using word embeddings and its flavors is a very promising approach for HTC.", "title": "" }, { "docid": "c47c1e991cd090c7e92ae61419ca823b", "text": "In recent years many tone mapping operators (TMOs) have been presented in order to display high dynamic range images (HDRI) on typical display devices. TMOs compress the luminance range while trying to maintain contrast. The inverse of tone mapping, inverse tone mapping, expands a low dynamic range image (LDRI) into an HDRI. HDRIs contain a broader range of physical values that can be perceived by the human visual system. We propose a new framework that approximates a solution to this problem. Our framework uses importance sampling of light sources to find the areas considered to be of high luminance and subsequently applies density estimation to generate an expand map in order to extend the range in the high luminance areas using an inverse tone mapping operator. The majority of today’s media is stored in the low dynamic range. Inverse tone mapping operators (iTMOs) could thus potentially revive all of this content for use in high dynamic range display and image based lighting (IBL). Moreover, we show another application that benefits quick capture of HDRIs for use in IBL.", "title": "" }, { "docid": "0eafd376aadefa3a0e86121c4f02000a", "text": "The Gaussian process latent variable model (GP-LVM) is a powerful approach for probabilistic modelling of high dimensional data through dimensional reduction. In this paper we extend the GP-LVM through hierarchies. A hierarchical model (such as a tree) allows us to express conditional independencies in the data as well as the manifold structure. We first introduce Gaussian process hierarchies through a simple dynamical model, we then extend the approach to a more complex hierarchy which is applied to the visualisation of human motion data sets.", "title": "" }, { "docid": "724f3775b6fb63507c1a327367675a9d", "text": "Machine-learning methods are becoming increasingly popular for automated data analysis. However, standard methods do not scale up to massive scientific and business data sets without expensive hardware. This paper investigates a practical alternative for scaling up: the use of distributed processing to take advantage of the often dormant PCs and workstations available on local networks. Each workstation runs a common rule-learning program on a subset of the data. 
We first show that for commonly used rule evaluation criteria, a simple form of cooperation can guarantee that a rule will look good to the set of cooperating learners if and only if it would look good to a single learner operating with the entire data set. We then show how such a system can further capitalize on different perspectives by sharing learned knowledge for significant reduction in search effort. We demonstrate the power of the method by learning from a massive data set taken from the domain of cellular fraud detection. Finally, we provide an overview of other methods for scaling up machine learning.", "title": "" }, { "docid": "ef8d88d57858706ba269a8f3aaa989f3", "text": "The mid 20 century witnessed some serious attempts in studies of play and games with an emphasis on their importance within culture. Most prominently, Johan Huizinga (1944) maintained in his book Homo Ludens that the earliest stage of culture is in the form of play and that culture proceeds in the shape and the mood of play. He also claimed that some elements of play crystallised as knowledge such as folklore, poetry and philosophy as culture advanced.", "title": "" }, { "docid": "31e0de8b5ca6321ef182b84c66e07ecd", "text": "Visual sentiment analysis is raising more and more attention with the increasing tendency to express emotions through images. While most existing works assign a single dominant emotion to each image, we address the sentiment ambiguity by label distribution learning (LDL), which is motivated by the fact that image usually evokes multiple emotions. Two new algorithms are developed based on conditional probability neural network (CPNN). First, we propose BCPNN which encodes image label into a binary representation to replace the signless integers used in CPNN, and employ it as a part of input for the neural network. Then, we train our ACPNN model by adding noises to ground truth label and augmenting affective distributions. Since current datasets are mostly annotated for single-label learning, we build two new datasets, one of which is relabeled on the popular Flickr dataset and the other is collected from Twitter. These datasets contain 20,745 images with multiple affective labels, which are over ten times larger than the existing ones. Experimental results show that the proposed methods outperform the state-of-theart works on our large-scale datasets and other publicly available benchmarks. Introduction In recent years, lots of attention has been paid to affective image classification (Jou et al. 2015; Joshi et al. 2011; Chen et al. 2015). Most of these works are conducted by psychological studies (Lang 1979; Lang, Bradley, and Cuthbert 1998), and focus on manual design of features and classifiers (You et al. 2015a). As defined as a singlelabel learning (SLL) problem which assigns a single emotional label to each image, previous works (You et al. 2016; Sun et al. 2016) have performed promising results. However, image sentiment may be the mixture of all components from different regions rather than a single representative emotion. Meanwhile, different people may have different emotional reactions to the same image, which is caused by a variety of elements like the different culture background and various recognitions from unique experiences (Peng et al. 2015). Furthermore, even a single viewer may have multiple reactions to one image. Figure 1 shows examples from a widely used dataset, i.e. 
Abstract Paintings (emotion categories: amusement, awe, contentment, excitement, anger, disgust, fear, and sadness).", "title": "" }, { "docid": "9817d5c5566166f1a391921a3e8744ed", "text": "Text classification often faces the problem of imbalanced training data. This is true in sentiment analysis and particularly prominent in emotion classification where multiple emotion categories are very likely to produce naturally skewed training data. Different sampling methods have been proposed to improve classification performance by reducing the imbalance ratio between training classes. However, data sparseness and the small disjunct problem remain obstacles in generating new samples for minority classes when the data are skewed and limited. Methods to produce meaningful samples for smaller classes rather than simple duplication are essential in overcoming this problem. In this paper, we present an oversampling method based on word embedding compositionality which produces meaningful balanced training data. We first use a large corpus to train a continuous skip-gram model to form a word embedding model maintaining the syntactic and semantic integrity of the word features. Then, a compositional algorithm based on recursive neural tensor networks is used to construct sentence vectors based on the word embedding model. Finally, we use the SMOTE algorithm as an oversampling method to generate samples for the minority classes and produce a fully balanced training set. Evaluation results on two quite different tasks show that the feature composition method and the oversampling method are both important in obtaining improved classification results. Our method effectively addresses the data imbalance issue and consequently achieves improved results for both sentiment and emotion classification.", "title": "" }, { "docid": "7b39f18d17218a6769d06757bf225f78", "text": "Editor’s note: Self-testing hardware has a long tradition as a complement to manufacturing testing based on test stimuli and response analysis. Today, it is a mature field and many complex SoCs have self-testing structures built-in (BIST). For self-aware SoCs this is a key technology, allowing the system to distinguish between correct and erroneous behavior. This survey article reviews the state of the art and shows how these techniques are to be generalized to facilitate self-awareness. —Axel Jantsch, TU Wien —Nikil Dutt, University of California at Irvine", "title": "" }, { "docid": "172216abbcb7acb25d5cdb8d65c2becf", "text": "In this paper, design of a planar wideband waveguide to microstrip transition for the 60 GHz frequency band is presented. The designed transition is fabricated using standard high frequency multilayer printed circuit board technology RO4003C. The waveguide to microstrip transition provides low production cost and allows for simple integration of the WR-15 rectangular waveguide without any modifications in the waveguide structure. Results of electromagnetic simulation and experimental investigation of the designed waveguide to microstrip transition are presented. The transmission bandwidth of the transition is equal to the full bandwidth of the WR-15 waveguide (50–75 GHz) for the −3 dB level of the insertion loss that was achieved by special modifications in the general aperture coupled transition structure. 
The transition loss is lower than 1 dB at the central frequency of 60 GHz.", "title": "" }, { "docid": "c1ddefd126c6d338c4cd9238e9067435", "text": "Tensor networks are efficient representations of high-dimensional tensors which have been very successful for physics and mathematics applications. We demonstrate how algorithms for optimizing such networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize models for classifying images. For the MNIST data set we obtain less than 1% test set classification error. We discuss how the tensor network form imparts additional structure to the learned model and suggest a possible generative interpretation.", "title": "" }, { "docid": "094570518e943330ff8d9e1c714698cb", "text": "The concept of taking surface wave as an assistant role to obtain wide beams with main directions tilting to endfire is introduced in this paper. Planar Yagi-Uda-like antennas support TE0 surface wave propagation and exhibit endfire radiation patterns. However, when such antennas are printed on a thin grounded substrate, there is no propagation of TE mode and beams tilting to broadside. Benefiting from the advantage that the high impedance surface (HIS) could support TE and/or TM modes propagation, the idea of placing a planar Yagi-Uda-like antenna in close proximity to a HIS to excite unidirectional predominately TE surface wave in HIS is proposed. Power radiated by the feed antenna, in combination with power diffracted by the surface wave determines the total radiation pattern, resulting in the desired pattern. For verification, a compact, low-profile, pattern-reconfigurable parasitic array (having an interstrip spacing of 0.048 λ0) with an integrated DC biasing circuit was fabricated and tested. Good agreement was obtained between measured and simulated results.", "title": "" }, { "docid": "e50320cfddc32a918389fbf8707d599f", "text": "Psilocybin, an indoleamine hallucinogen, produces a psychosis-like syndrome in humans that resembles first episodes of schizophrenia. In healthy human volunteers, the psychotomimetic effects of psilocybin were blocked dose-dependently by the serotonin-2A antagonist ketanserin or the atypical antipsychotic risperidone, but were increased by the dopamine antagonist and typical antipsychotic haloperidol. These data are consistent with animal studies and provide the first evidence in humans that psilocybin-induced psychosis is due to serotonin-2A receptor activation, independently of dopamine stimulation. Thus, serotonin-2A overactivity may be involved in the pathophysiology of schizophrenia and serotonin-2A antagonism may contribute to therapeutic effects of antipsychotics.", "title": "" }, { "docid": "e793b233039c9cb105fa311fa08312cd", "text": "A generalized single-phase multilevel current source inverter (MCSI) topology with self-balancing current is proposed, which uses the duality transformation from the generalized multilevel voltage source inverter (MVSI) topology. The existing single-phase 8- and 6-switch 5-level current source inverters (CSIs) can be derived from this generalized MCSI topology. In the proposed topology, each intermediate DC-link current level can be balanced automatically without adding any external circuits; thus, a true multilevel structure is provided. Moreover, owing to the dual relationship, many research results relating to the operation, modulation, and control strategies of MVSIs can be applied directly to the MCSIs. 
Some simulation results are presented to verify the proposed MCSI topology.", "title": "" }, { "docid": "84b9601738c4df376b42d6f0f6190f53", "text": "Cloud Computing is one of the most important trend and newest area in the field of information technology in which resources (e.g. CPU and storage) can be leased and released by customers through the Internet in an on-demand basis. The adoption of Cloud Computing in Education and developing countries is real an opportunity. Although Cloud computing has gained popularity in Pakistan especially in education and industry, but its impact in Pakistan is still unexplored especially in Higher Education Department. Already published work investigated in respect of factors influencing on adoption of cloud computing but very few investigated said analysis in developing countries. The Higher Education Institutions (HEIs) of Punjab, Pakistan are still not focused to discover cloud adoption factors. In this study, we prepared cloud adoption model for Higher Education Institutions (HEIs) of Punjab, a survey was carried out from 900 students all over Punjab. The survey was designed based upon literature and after discussion and opinions of academicians. In this paper, 34 hypothesis were developed that affect the cloud computing adoption in HEIs and tested by using powerful statistical analysis tools i.e. SPSS and SmartPLS. Statistical findings shows that 84.44% of students voted in the favor of cloud computing adoption in their colleges, while 99% supported Reduce Cost as most important factor in cloud adoption.", "title": "" }, { "docid": "b17d89e7db1ca18fa5bcf2446f553a1b", "text": "Following the definition of developable surface in differential geometry, the flattenable mesh surface, a special type of piecewise- linear surface, inherits the good property of developable surface about having an isometric map from its 3D shape to a corresponding planar region. Different from the developable surfaces, a flattenable mesh surface is more flexible to model objects with complex shapes (e.g., cramped paper or warped leather with wrinkles). Modelling a flattenable mesh from a given input mesh surface can be completed under a constrained nonlinear optimization framework. In this paper, we reformulate the problem in terms of estimation error. Therefore, the shape of a flattenable mesh can be computed by the least-norm solutions faster. Moreover, the method for adding shape constraints to the modelling of flattenable mesh surfaces has been exploited. We show that the proposed method can compute flattenable mesh surfaces from input piecewise linear surfaces successfully and efficiently.", "title": "" }, { "docid": "a968a9842bb49f160503b24bff57cdd6", "text": "This paper addresses target discrimination in synthetic aperture radar (SAR) imagery using linear and nonlinear adaptive networks. Neural networks are extensively used for pattern classification but here the goal is discrimination. We show that the two applications require different cost functions. We start by analyzing with a pattern recognition perspective the two-parameter constant false alarm rate (CFAR) detector which is widely utilized as a target detector in SAR. Then we generalize its principle to construct the quadratic gamma discriminator (QGD), a nonparametrically trained classifier based on local image intensity. The linear processing element of the QCD is further extended with nonlinearities yielding a multilayer perceptron (MLP) which we call the NL-QGD (nonlinear QGD). 
MLPs are normally trained based on the L(2) norm. We experimentally show that the L(2) norm is not recommended to train MLPs for discriminating targets in SAR. Inspired by the Neyman-Pearson criterion, we create a cost function based on a mixed norm to weight the false alarms and the missed detections differently. Mixed norms can easily be incorporated into the backpropagation algorithm, and lead to better performance. Several other norms (L(8), cross-entropy) are applied to train the NL-QGD and all outperformed the L(2) norm when validated by receiver operating characteristics (ROC) curves. The data sets are constructed from TABILS 24 ISAR targets embedded in 7 km(2) of SAR imagery (MIT/LL mission 90).", "title": "" }, { "docid": "a60128a5b5616added12f62e801671f0", "text": "Research shows that many organizations overlook needs and opportunities to strengthen ethics. Barriers can make it hard to see the need for stronger ethics and even harder to take effective action. These barriers include the organization's misleading use of language, misuse of an ethics code, culture of silence, strategies of justification, institutional betrayal, and ethical fallacies. Ethics placebos tend to take the place of steps to see, solve, and prevent problems. This article reviews relevant research and specific steps that create change.", "title": "" }, { "docid": "e5a2c2ef9d2cb6376b18c1e7232016b2", "text": "In this paper we describe the problem of Visual Place Categorization (VPC) for mobile robotics, which involves predicting the semantic category of a place from image measurements acquired from an autonomous platform. For example, a robot in an unfamiliar home environment should be able to recognize the functionality of the rooms it visits, such as kitchen, living room, etc. We describe an approach to VPC based on sequential processing of images acquired with a conventional video camera. We identify two key challenges: Dealing with non-characteristic views and integrating restricted-FOV imagery into a holistic prediction. We present a solution to VPC based upon a recently-developed visual feature known as CENTRIST (CENsus TRansform hISTogram). We describe a new dataset for VPC which we have recently collected and are making publicly available. We believe this is the first significant, realistic dataset for the VPC problem. It contains the interiors of six different homes with ground truth labels. We use this dataset to validate our solution approach, achieving promising results.", "title": "" }, { "docid": "48f261e94383c49fc63e9c4341236033", "text": "Due to very fast growth of information in the last few decades, getting precise information in real time is becoming increasingly difficult. Search engines such as Google and Yahoo are helping in finding the information but the information provided by them are in the form of documents which consumes a lot of time of the user. Question Answering Systems have emerged as a good alternative to search engines where they produce the desired information in a very precise way in the real time. This saves a lot of time for the user. There has been a lot of research in the field of English and some European language Question Answering Systems. However, Arabic Question Answering Systems could not match the pace due to some inherent difficulties with the language itself as well as due to lack of tools available to assist the researchers. Question classification is a very important module of Question Answering Systems. 
In this paper, we present a method to accurately classify Arabic questions in order to retrieve precise answers. The proposed method gives promising results.", "title": "" } ]
scidocsrr
d17089640c2b0821ad37fb07d77ec2f5
MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge
[ { "docid": "71a318111eb8ab480d3f6977f5ced403", "text": "Open Mind Common Sense is a knowledge acquisition system designed to acquire commonsense knowledge from the general public over the web. We describe and evaluate our first fielded system, which enabled the construction of a 400,000 assertion commonsense knowledge base. We then discuss how our second-generation system addresses weaknesses discovered in the first. The new system acquires facts, descriptions, and stories by allowing participants to construct and fill in natural language templates. It employs word-sense disambiguation and methods of clarifying entered knowledge, analogical inference to provide feedback, and allows participants to validate knowledge and in turn each other.", "title": "" }, { "docid": "1ebb333d5a72c649cd7d7986f5bf6975", "text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock,\" Abstract We describe a theoretical system intended to facilitate the use of knowledge In an understand­ ing system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to under­ stand. The notion of plans is introduced to ac­ count for general knowledge about novel situa­ tions. I. Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the as fitting into the notion of \"frames.\" Minsky at­ tempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Mlnsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that It does not lend itself to applications without further specialization. This paper is an attempt to devel­ op further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas pre­ sented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as \"scripts.\" II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the po­ sitive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational sys­ tem or any working computational system the res­ triction of world knowledge need not critically concern him. Our feeling is that an effective characteri­ zation of knowledge can result in a real under­ standing system in the not too distant future. We expect that programs based on the theory we out­ …", "title": "" }, { "docid": "bf294a4c3af59162b2f401e2cdcb060b", "text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. 
Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.", "title": "" } ]
[ { "docid": "8d90b9fbf7af1ea36f93f88e6ce11ba2", "text": "Given its serious implications for psychological and socio-emotional health, the prevention of problem gambling among adolescents is increasingly acknowledged as an area requiring attention. The theory of planned behavior (TPB) is a well-established model of behavior change that has been studied in the development and evaluation of primary preventive interventions aimed at modifying cognitions and behavior. However, the utility of the TPB has yet to be explored as a framework for the development of adolescent problem gambling prevention initiatives. This paper first examines the existing empirical literature addressing the effectiveness of school-based primary prevention programs for adolescent gambling. Given the limitations of existing programs, we then present a conceptual framework for the integration of the TPB in the development of effective problem gambling preventive interventions. The paper describes the TPB, demonstrates how the framework has been applied to gambling behavior, and reviews the strengths and limitations of the model for the design of primary prevention initiatives targeting adolescent risk and addictive behaviors, including adolescent gambling.", "title": "" }, { "docid": "d5f43b7405e08627b7f0930cc1ddd99e", "text": "Source code duplication, commonly known as code cloning, is considered an obstacle to software maintenance because changes to a cloned region often require consistent changes to other regions of the source code. Research has provided evidence that the elimination of clones may not always be practical, feasible, or cost-effective. We present a clone management approach that describes clone regions in a robust way that is independent from the exact text of clone regions or their location in a file, and that provides support for tracking clones in evolving software. Our technique relies on the concept of abstract clone region descriptors (CRDs), which describe clone regions using a combination of their syntactic, structural, and lexical information. We present our definition of CRDs, and describe a clone tracking system capable of producing CRDs from the output of different clone detection tools, notifying developers of modifications to clone regions, and supporting updates to the documented clone relationships. We evaluated the performance and usefulness of our approach across three clone detection tools and five subject systems, and the results indicate that CRDs are a practical and robust representation for tracking code clones in evolving software.", "title": "" }, { "docid": "8fc26adf38a835823f3ec590b43abbc9", "text": "This paper presents an application of the analytic hierarchy process (AHP) used to select the most appropriate tool to support knowledge management (KM). This method adopts a multi-criteria approach that can be used to analyse and compare KM tools in the software market. The method is based on pairwise comparisons between several factors that affect the selection of the most appropriate KM tool. An AHP model is formulated and applied to a real case of assisting decision-makers in a leading communications company in Hong Kong to evaluate a suitable KM tool. We believe that the application shown can be of use to managers and that, because of its ease of implementation, others can benefit from this approach. q 2005 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "61ccc148d212d033d9e28898ef2898eb", "text": "Acquiring images of the same anatomy with multiple different contrasts increases the diversity of diagnostic information available in an MR exam. Yet, scan time limitations may prohibit acquisition of certain contrasts, and some contrasts may be corrupted by noise and artifacts. In such cases, the ability to synthesize unacquired or corrupted contrasts can improve diagnostic utility. For multi-contrast synthesis, current methods learn a nonlinear intensity transformation between the source and target images, either via nonlinear regression or deterministic neural networks. These methods can in turn suffer from the loss of structural details in synthesized images. Here, we propose a new approach for multi-contrast MRI synthesis based on conditional generative adversarial networks. The proposed approach preserves intermediate-to-high frequency details via an adversarial loss; and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images. Information from neighboring cross-sections are utilized to further improve synthesis quality. Demonstrations on T1- and T2- weighted images from healthy subjects and patients clearly indicate the superior performance of the proposed approach compared to previous state-of-the-art methods. Our synthesis approach can help improve quality and versatility of multicontrast MRI exams without the need for prolonged or repeated examinations.", "title": "" }, { "docid": "45e1a424ad0807ce49cd4e755bdd9351", "text": "Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend towards deeper packet inspection to construct more relevant features through targeted content parsing. These context sensitive features are required to detect current attacks.", "title": "" }, { "docid": "28d75588fdb4ff45929da124b001e8cc", "text": "We present a novel training framework for neural sequence models, particularly for grounded dialog generation. 
The standard training paradigm for these models is maximum likelihood estimation (MLE), or minimizing the cross-entropy of the human responses. Across a variety of domains, a recurring problem with MLE trained generative neural dialog models (G) is that they tend to produce ‘safe’ and generic responses (‘I don’t know’, ‘I can’t tell’). In contrast, discriminative dialog models (D) that are trained to rank a list of candidate human responses outperform their generative counterparts; in terms of automatic metrics, diversity, and informativeness of the responses. However, D is not useful in practice since it can not be deployed to have real conversations with users. Our work aims to achieve the best of both worlds – the practical usefulness of G and the strong performance of D – via knowledge transfer from D to G. Our primary contribution is an end-to-end trainable generative visual dialog model, where G receives gradients from D as a perceptual (not adversarial) loss of the sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS) approximation to the discrete distribution – specifically, a RNN augmented with a sequence of GS samplers, coupled with the straight-through gradient estimator to enable end-to-end differentiability. We also introduce a stronger encoder for visual dialog, and employ a self-attention mechanism for answer encoding along with a metric learning loss to aid D in better capturing semantic similarities in answer responses. Overall, our proposed model outperforms state-of-the-art on the VisDial dataset by a significant margin (2.67% on recall@10). The source code can be downloaded from https://github.com/jiasenlu/visDial.pytorch", "title": "" }, { "docid": "e05fd90453c53b7cc41fa3b7c5303386", "text": "The Resource Description Framework (RDF) represents a main ingredient and data representation format for Linked Data and the Semantic Web. It supports a generic graph-based data model and data representation format for describing things, including their relationships with other things. As the size of RDF datasets is growing fast, RDF data management systems must be able to cope with growing amounts of data. Even though physically handling RDF data using a relational table is possible, querying a giant triple table becomes very expensive because of the multiple nested joins required for answering graph queries. In addition, the heterogeneity of RDF Data poses entirely new challenges to database systems. This article provides a comprehensive study of the state of the art in handling and querying RDF data. In particular, we focus on data storage techniques, indexing strategies, and query execution mechanisms. Moreover, we provide a classification of existing systems and approaches. We also provide an overview of the various benchmarking efforts in this context and discuss some of the open problems in this domain.", "title": "" }, { "docid": "89438b3b2a78c54a44236b720940c8f2", "text": "InProcess-Aware Information Systems, business processes are often modeled in an explicit way. Roughly speaking, the available business processmodeling languages can bedivided into twogroups. Languages from the first group are preferred by academic people but shunned by business people, and include Petri nets and process algebras. These academic languages have a proper formal semantics, which allows the corresponding academic models to be verified in a formal way. 
Languages from the second group are preferred by business people but disliked by academic people, and include BPEL, BPMN, andEPCs. These business languages often lack any proper semantics, which often leads to debates on how to interpret certain business models. Nevertheless, business models are used in practice, whereas academic models are hardly used. To be able to use, for example, the abundance of Petri net verification techniques on business models, we need to be able to transform these models to Petri nets. In this paper, we investigate anumberofPetri net transformations that already exist.For every transformation, we investigate the transformation itself, the constructs in the business models that are problematic for the transformation and the main applications for the transformation.", "title": "" }, { "docid": "5c46291b9a3cab0fb2f9501fff6f6a36", "text": "We discuss the fundamental limits of computing using a new paradigm for quantum computation, cellular automata composed of arrays of Coulombically coupled quantum dot molecules, which we term quantum cellular automata (QCA). Any logical or arithmetic operation can be performed in this scheme. QCA’s provide a valuable concrete example of quantum computation in which a number of fundamental issues come to light. We examine the physics of the computing process in this paradigm. We show to what extent thermodynamic considerations impose limits on the ultimate size of individual QCA arrays. Adiabatic operation of the QCA is examined and the implications for dissipationless computing are explored.", "title": "" }, { "docid": "ce0d288ea4ee56aca9f986fbca138c81", "text": "Chronic exposure to stress hormones, whether it occurs during the prenatal period, infancy, childhood, adolescence, adulthood or aging, has an impact on brain structures involved in cognition and mental health. However, the specific effects on the brain, behaviour and cognition emerge as a function of the timing and the duration of the exposure, and some also depend on the interaction between gene effects and previous exposure to environmental adversity. Advances in animal and human studies have made it possible to synthesize these findings, and in this Review a model is developed to explain why different disorders emerge in individuals exposed to stress at different times in their lives.", "title": "" }, { "docid": "a1623a10e06537a038ce3eaa1cfbeed7", "text": "We present a simple zero-knowledge proof of knowledge protocol of which many protocols in the literature are instantiations. These include Schnorr’s protocol for proving knowledge of a discrete logarithm, the Fiat-Shamir and Guillou-Quisquater protocols for proving knowledge of a modular root, protocols for proving knowledge of representations (like Okamoto’s protocol), protocols for proving equality of secret values, a protocol for proving the correctness of a Diffie-Hellman key, protocols for proving the multiplicative relation of three commitments (as required in secure multi-party computation), and protocols used in credential systems. This shows that a single simple treatment (and proof), at a high level of abstraction, can replace the individual previous treatments. Moreover, one can devise new instantiations of the protocol.", "title": "" }, { "docid": "d3214d24911a5e42855fd1a53516d30b", "text": "This paper extends the face detection framework proposed by Viola and Jones 2001 to handle profile views and rotated faces. As in the work of Rowley et al 1998. and Schneiderman et al. 
2000, we build different detectors for different views of the face. A decision tree is then trained to determine the viewpoint class (such as right profile or rotated 60 degrees) for a given window of the image being examined. This is similar to the approach of Rowley et al. 1998. The appropriate detector for that viewpoint can then be run instead of running all detectors on all windows. This technique yields good results and maintains the speed advantage of the Viola-Jones detector.", "title": "" }, { "docid": "fec16344f8b726b9d232423424c101d3", "text": "A triboelectric separator manufactured by PlasSep, Ltd., Canada was evaluated at MBA Polymers, Inc. as part of a project sponsored by the American Plastics Council (APC) to explore the potential of triboelectric methods for separating commingled plastics from end-of-life durables. The separator works on a very simple principle: that dissimilar materials will transfer electrical charge to one another when rubbed together, the resulting surface charge differences can then be used to separate these dissimilar materials from one another in an electric field. Various commingled plastics were tested under controlled operating conditions. The feed materials tested include commingled plastics derived from electronic shredder residue (ESR), automobile shredder residue (ASR), refrigerator liners, and water bottle plastics. The separation of ESR ABS and HIPS, and water bottle PC and PVC were very promising. However, this device did not efficiently separate many plastic mixtures, such as rubber and plastics; nylon and acetal; and PE and PP from ASR. All tests were carried out based on the standard operating conditions determined for ESR ABS and HIPS. There is the potential to improve the separation performance for many of the feed materials by individually optimizing their operating conditions. Cursory economics shows that the operation cost is very dependent upon assumed throughput, separation efficiency and requisite purity. Unit operation cost could range from $0.03/lb. to $0.05/lb. at capacities of 2000 lb./hr. and 1000 lb./hr.", "title": "" }, { "docid": "06518637c2b44779da3479854fdbb84d", "text": "OBJECTIVE\nThe relative short-term efficacy and long-term benefits of pharmacologic versus psychotherapeutic interventions have not been studied for posttraumatic stress disorder (PTSD). 
This study compared the efficacy of a selective serotonin reup-take inhibitor (SSRI), fluoxetine, with a psychotherapeutic treatment, eye movement desensitization and reprocessing (EMDR), and pill placebo and measured maintenance of treatment gains at 6-month follow-up.\n\n\nMETHOD\nEighty-eight PTSD subjects diagnosed according to DSM-IV criteria were randomly assigned to EMDR, fluoxetine, or pill placebo. They received 8 weeks of treatment and were assessed by blind raters posttreatment and at 6-month follow-up. The primary outcome measure was the Clinician-Administered PTSD Scale, DSM-IV version, and the secondary outcome measure was the Beck Depression Inventory-II. The study ran from July 2000 through July 2003.\n\n\nRESULTS\nThe psychotherapy intervention was more successful than pharmacotherapy in achieving sustained reductions in PTSD and depression symptoms, but this benefit accrued primarily for adult-onset trauma survivors. At 6-month follow-up, 75.0% of adult-onset versus 33.3% of child-onset trauma subjects receiving EMDR achieved asymptomatic end-state functioning compared with none in the fluoxetine group. For most childhood-onset trauma patients, neither treatment produced complete symptom remission.\n\n\nCONCLUSIONS\nThis study supports the efficacy of brief EMDR treatment to produce substantial and sustained reduction of PTSD and depression in most victims of adult-onset trauma. It suggests a role for SSRIs as a reliable first-line intervention to achieve moderate symptom relief for adult victims of childhood-onset trauma. Future research should assess the impact of lengthier intervention, combination treatments, and treatment sequencing on the resolution of PTSD in adults with childhood-onset trauma.", "title": "" }, { "docid": "2579cb11b9d451d6017ebb642d6a35cb", "text": "The presence of bots has been felt in many aspects of social media. Twitter, one example of social media, has especially felt the impact, with bots accounting for a large portion of its users. These bots have been used for malicious tasks such as spreading false information about political candidates and inflating the perceived popularity of celebrities. Furthermore, these bots can change the results of common analyses performed on social media. It is important that researchers and practitioners have tools in their arsenal to remove them. Approaches exist to remove bots, however they focus on precision to evaluate their model at the cost of recall. This means that while these approaches are almost always correct in the bots they delete, they ultimately delete very few, thus many bots remain. We propose a model which increases the recall in detecting bots, allowing a researcher to delete more bots. We evaluate our model on two real-world social media datasets and show that our detection algorithm removes more bots from a dataset than current approaches.", "title": "" }, { "docid": "fc3f6dc6d2a66b6f692f76a02235e9d7", "text": "This paper presents modelling of ball and plate systems based on first principles by considering balance of forces and torques. A non-linear model is derived considering the dynamics of motors, gears, ball and plate. The non-linear model is linearized near the operating region to obtain a standard state space model. 
This linear model is used for discrete optimal control of the ball and plate system — the trajectory of the ball is controlled by control voltages to the motor.", "title": "" }, { "docid": "660e6273304e16c8c4bc5a76e738c3b6", "text": "BACKGROUND\n\"Fitspiration\" (also known as \"fitspo\") aims to inspire individuals to exercise and be healthy, but emerging research indicates exposure can negatively impact female body image. Fitspiration is frequently accessed on social media; however, it is currently unclear the degree to which messages about body image and exercise differ by gender of the subject.\n\n\nOBJECTIVE\nThe aim of our study was to conduct a content analysis to identify the characteristics of fitspiration content posted across social media and whether this differs according to subject gender.\n\n\nMETHODS\nContent tagged with #fitspo across Instagram, Facebook, Twitter, and Tumblr was extracted over a composite 30-minute period. All posts were analyzed by 2 independent coders according to a codebook.\n\n\nRESULTS\nOf the 415/476 (87.2%) relevant posts extracted, most posts were on Instagram (360/415, 86.8%). Most posts (308/415, 74.2%) related thematically to exercise, and 81/415 (19.6%) related thematically to food. In total, 151 (36.4%) posts depicted only female subjects and 114/415 (27.5%) depicted only male subjects. Female subjects were typically thin but toned; male subjects were often muscular or hypermuscular. Within the images, female subjects were significantly more likely to be aged under 25 years (P<.001) than the male subjects, to have their full body visible (P=.001), and to have their buttocks emphasized (P<.001). Male subjects were more likely to have their face visible in the post (P=.005) than the female subjects. Female subjects were more likely to be sexualized than the male subjects (P=.002).\n\n\nCONCLUSIONS\nFemale #fitspo subjects typically adhered to the thin or athletic ideal, and male subjects typically adhered to the muscular ideal. Future research and interventional efforts should consider the potential objectifying messages in fitspiration, as it relates to both female and male body image.", "title": "" }, { "docid": "4b1a02a1921a33a8c2f4d01670174f77", "text": "In this paper we propose an approach for articulated tracking of multiple people in unconstrained videos. Our starting point is a model that resembles existing architectures for single-frame pose estimation but is several orders of magnitude faster. We achieve this in two ways: (1) by simplifying and sparsifying the body-part relationship graph and leveraging recent methods for faster inference, and (2) by offloading a substantial share of computation onto a feed-forward convolutional architecture that is able to detect and associate body joints of the same person even in clutter. We use this model to generate proposals for body joint locations and formulate articulated tracking as spatio-temporal grouping of such proposals. This allows to jointly solve the association problem for all people in the scene by propagating evidence from strong detections through time and enforcing constraints that each proposal can be assigned to one person only. We report results on a public MPII Human Pose benchmark and on a new dataset of videos with multiple people. 
We demonstrate that our model achieves state-of-the-art results while using only a fraction of time and is able to leverage temporal information to improve state-of-the-art for crowded scenes1.", "title": "" }, { "docid": "577841609abb10a978ed54429f057def", "text": "Smart environments integrates various types of technologies, including cloud computing, fog computing, and the IoT paradigm. In such environments, it is essential to organize and manage efficiently the broad and complex set of heterogeneous resources. For this reason, resources classification and categorization becomes a vital issue in the control system. In this paper we make an exhaustive literature survey about the various computing systems and architectures which defines any type of ontology in the context of smart environments, considering both, authors that explicitly propose resources categorization and authors that implicitly propose some resources classification as part of their system architecture. As part of this research survey, we have built a table that summarizes all research works considered, and which provides a compact and graphical snapshot of the current classification trends. The goal and primary motivation of this literature survey has been to understand the current state of the art and identify the gaps between the different computing paradigms involved in smart environment scenarios. As a result, we have found that it is essential to consider together several computing paradigms and technologies, and that there is not, yet, any research work that integrates a merged resources classification, taxonomy or ontology required in such heterogeneous scenarios.", "title": "" }, { "docid": "488b0adfe43fc4dbd9412d57284fc856", "text": "We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++. ∗This work was supported by the Advanced Research Project Agency and the Office of Naval Research under Arpa Order 8888, Contract N00014-92-C-0153.", "title": "" } ]
scidocsrr
d15550b13a3053be636c6a2b7608dcc1
Elicitation for Preferences Single Peaked on Trees
[ { "docid": "edce0a0d0b594e21271a4116e223f84b", "text": "Eliciting the preferences of a set of agents over a set of alternatives is a problem of fundamental importance in social choice theory. Prior work on this problem has studied the query complexity of preference elicitation for the unrestricted domain and for the domain of single peaked preferences. In this paper, we consider the domain of single crossing preference profiles and study the query complexity of preference elicitation under various settings. We consider two distinct situations: when an ordering of the voters with respect to which the profile is single crossing is known versus when it is unknown. We also consider different access models: when the votes can be accessed at random, as opposed to when they are coming in a pre-defined sequence. In the sequential access model, we distinguish two cases when the ordering is known: the first is that sequence in which the votes appear is also a single-crossing order, versus when it is not. The main contribution of our work is to provide polynomial time algorithms with low query complexity for preference elicitation in all the above six cases. Further, we show that the query complexities of our algorithms are optimal up to constant factors for all but one of the above six cases. We then present preference elicitation algorithms for profiles which are close to being single crossing under various notions of closeness, for example, single crossing width, minimum number of candidates|voters whose deletion makes a profile single crossing.", "title": "" } ]
[ { "docid": "6c1a1e47ce91b2d9ae60a0cfc972b7e4", "text": "We investigate automatic classification of speculative language (‘hedging’), in biomedical text using weakly supervised machine learning. Our contributions include a precise description of the task with annotation guidelines, analysis and discussion, a probabilistic weakly supervised learning model, and experimental evaluation of the methods presented. We show that hedge classification is feasible using weakly supervised ML, and point toward avenues for future research.", "title": "" }, { "docid": "f80430c36094020991f167aeb04f21e0", "text": "Participants in recent discussions of AI-related issues ranging from intelligence explosion to technological unemployment have made diverse claims about the nature, pace, and drivers of progress in AI. However, these theories are rarely specified in enough detail to enable systematic evaluation of their assumptions or to extrapolate progress quantitatively, as is often done with some success in other technological domains. After reviewing relevant literatures and justifying the need for more rigorous modeling of AI progress, this paper contributes to that research program by suggesting ways to account for the relationship between hardware speed increases and algorithmic improvements in AI, the role of human inputs in enabling AI capabilities, and the relationships between different sub-fields of AI. It then outlines ways of tailoring AI progress models to generate insights on the specific issue of technological unemployment, and outlines future directions for research on AI progress.", "title": "" }, { "docid": "d1796cd063e0d1ea03462d2002c4dae5", "text": "This paper describes the experimental characterization of MOS bipolar pseudo-resistors for a general purpose technology. Very-high resistance values can be obtained in small footprint layouts, allowing the development of high-pass filters with RC constants over 1 second. The pseudo-resistor presents two different behavior regions, and as described in this work, in bio-amplifiers applications, important functions are assigned to each of these regions. 0.13 μm 8HP technology from GlobalFoundries was chosen as the target technology for the prototypes, because of its versatility. Due to the very-low current of pseudo-resistors, a circuit for indirect resistance measurement was proposed and applied. The fabricated devices presented resistances over 1 teraohm and preserved both the linear and the exponential operation regions, proving that they are well suited for bio-amplifier applications.", "title": "" }, { "docid": "22947cc8f2b1be70df10cb6adf210fc5", "text": "GANS are powerful generative models that are able to model the manifold of natural images. We leverage this property to perform manifold regularization by approximating the Laplacian norm using a Monte Carlo approximation that is easily computed with the GAN. When incorporated into the feature-matching GAN of Salimans et al. (2016), we achieve state-of-the-art results for GAN-based semisupervised learning on the CIFAR-10 dataset, with a method that is significantly easier to implement than competing methods.", "title": "" }, { "docid": "910fdcf9e9af05b5d1cb70a9c88e4143", "text": "We propose NEURAL ENQUIRER — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. 
Unlike previous work on end-to-end training of semantic parsers, NEURAL ENQUIRER is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.", "title": "" }, { "docid": "1c1aac16770866e6cee914440ccf7eeb", "text": "In this paper, we propose a new unsupervised domain adaptation approach called Collaborative and Adversarial Network (CAN) through domain-collaborative and domain-adversarial training of neural networks. We add several domain classifiers on multiple CNN feature extraction blocks1, in which each domain classifier is connected to the hidden representations from one block and one loss function is defined based on the hidden presentation and the domain labels (e.g., source and target). We design a new loss function by integrating the losses from all blocks in order to learn domain informative representations from lower blocks through collaborative learning and learn domain uninformative representations from higher blocks through adversarial learning. We further extend our CAN method as Incremental CAN (iCAN), in which we iteratively select a set of pseudo-labelled target samples based on the image classifier and the last domain classifier from the previous training epoch and re-train our CAN model by using the enlarged training set. Comprehensive experiments on two benchmark datasets Office and ImageCLEF-DA clearly demonstrate the effectiveness of our newly proposed approaches CAN and iCAN for unsupervised domain adaptation.", "title": "" }, { "docid": "23641b410a3d1ae3f270bb19988ad4f5", "text": "Brain Computer Interface systems rely on lengthy training phases that can last up to months due to the inherent variability in brainwave activity between users. We propose a BCI architecture based on the co-learning between the user and the system through different feedback strategies. Thus, we achieve an operational BCI within minutes. We apply our system to the piloting of an AR.Drone 2.0 quadricopter. We show that our architecture provides better task performance than traditional BCI paradigms within a shorter time frame. We further demonstrate the enthusiasm of users towards our BCI-based interaction modality and how they find it much more enjoyable than traditional interaction modalities.", "title": "" }, { "docid": "b7b1153067a784a681f2c6d0105acb2a", "text": "Investigations of the human connectome have elucidated core features of adult structural networks, particularly the crucial role of hub-regions. However, little is known regarding network organisation of the healthy elderly connectome, a crucial prelude to the systematic study of neurodegenerative disorders. Here, whole-brain probabilistic tractography was performed on high-angular diffusion-weighted images acquired from 115 healthy elderly subjects (age 76-94 years; 65 females). Structural networks were reconstructed between 512 cortical and subcortical brain regions. We sought to investigate the architectural features of hub-regions, as well as left-right asymmetries, and sexual dimorphisms. 
We observed that the topology of hub-regions is consistent with a young adult population, and previously published adult connectomic data. More importantly, the architectural features of hub connections reflect their ongoing vital role in network communication. We also found substantial sexual dimorphisms, with females exhibiting stronger inter-hemispheric connections between cingulate and prefrontal cortices. Lastly, we demonstrate intriguing left-lateralized subnetworks consistent with the neural circuitry specialised for language and executive functions, whilst rightward subnetworks were dominant in visual and visuospatial streams. These findings provide insights into healthy brain ageing and provide a benchmark for the study of neurodegenerative disorders such as Alzheimer's disease (AD) and frontotemporal dementia (FTD).", "title": "" }, { "docid": "29e030bb4d8547d7615b8e3d17ec843d", "text": "This Paper examines the enforcement of occupational safety and health (OSH) regulations; it validates the state of enforcement of OSH regulations by extracting the salient issues that influence enforcement of OSH regulations in Nigeria. It’s the duty of the Federal Ministry of Labour and Productivity (Inspectorate Division) to enforce the Factories Act of 1990, while the Labour, Safety, Health and Welfare Bill of 2012 empowers the National Council for Occupational Safety and Health of Nigeria to administer the proceeding regulations on its behalf. Sadly enough, the impact of the enforcement authority is ineffective, as the key stakeholders pay less attention to OSH regulations; thus, rendering the OSH scheme dysfunctional and unenforceable, at the same time impeding OSH development. For optimum OSH in Nigeria, maximum enforcement and compliance with the regulations must be in place. This paper, which is based on conceptual analysis, reviews literature gathered through desk literature search. It identified issues to OSH enforcement such as: political influence, bribery and corruption, insecurity, lack of governmental commitment, inadequate legislation inter alia. While recommending ways to improve the enforcement of OSH regulations, it states that self-regulatory style of enforcing OSH regulations should be adopted by organisations. It also recommends that more OSH inspectors be recruited; local government authorities empowered to facilitate the enforcement of OSH regulations. Moreover, the study encourages organisations to champion OSH enforcement, as it is beneficial to them; it concludes that the burden of OSH improvement in Nigeria is on the government, educational authorities, organisations and trade unions.", "title": "" }, { "docid": "23ce33bb6ffbbd8f598cdcd0498d7828", "text": "Artificial neural networks (ANNs) are a promising machine learning technique in classifying non-linear electrocardiogram (ECG) signals and recognizing abnormal patterns suggesting risks of cardiovascular diseases (CVDs). In this paper, we propose a new reusable neuron architecture (RNA) enabling a performance-efficient and cost-effective silicon implementation for ANN. The RNA architecture consists of a single layer of physical RNA neurons, each of which is designed to use minimal hardware resource (e.g., a single 2-input multiplier-accumulator is used to compute the dot product of two vectors). 
By carefully applying the principal of time sharing, RNA can multiplexs this single layer of physical neurons to efficiently execute both feed-forward and back-propagation computations of an ANN while conserving the area and reducing the power dissipation of the silicon. A three-layer 51-30-12 ANN is implemented in RNA to perform the ECG classification for CVD detection. This RNA hardware also allows on-chip automatic training update. A quantitative design space exploration in area, power dissipation, and execution speed between RNA and three other implementations representative of different reusable hardware strategies is presented and discussed. Compared with an equivalent software implementation in C executed on an embedded microprocessor, the RNA ASIC achieves three orders of magnitude improvements in both the execution speed and the energy efficiency.", "title": "" }, { "docid": "c080120ead059b7506eeb74a40d6c5a6", "text": "The management of airport surface operations to provide shared situational awareness and to control taxi times through the use of ‘virtual queues’ has become an important component of Air Traffic Management (ATM) research and development in both Europe and the United States. Airport Collaborative Decision Making (CDM) has been implemented at a number of airports in Europe, and multiple departure metering concepts have now been tested in the US National Airspace System (NAS). This paper provides a review and comparison of the different airport surface departure management concepts, and describes one such concept in detail that has been evaluated operationally in the field in the US, the Collaborative Departure Queue Management (CDQM) concept. CDQM has been developed and evaluated by the Federal Aviation Administration (FAA) under the Surface Trajectory Based Operations (STBO) project. This paper provides a description of the operational field evaluation of CDQM that was conducted in Memphis, Tennessee, during 2009 and 2010. An analysis of the effectiveness, accuracy and benefit of CDQM in managing departure operations during the field evaluation is presented. CDQM was found to provide reduced taxi times, and resultant reduced fuel usage and emissions, while maintaining full use of departure capacity. Keywords-airport surface traffic management; departure queue management; scheduling algorithms; collaborative decision making; equitable rationing of capacity", "title": "" }, { "docid": "771e63f84bd65462708aba9f16405a39", "text": "Location-based social networks (LBSNs) offer researchers rich data to study people's online activities and mobility patterns. One important application of such studies is to provide personalized point-of-interest (POI) recommendations to enhance user experience in LBSNs. Previous solutions directly predict users' preference on locations but fail to provide insights about users' preference transitions among locations. In this work, we propose a novel category-aware POI recommendation model, which exploits the transition patterns of users' preference over location categories to improve location recommendation accuracy. Our approach consists of two stages: (1) preference transition (over location categories) prediction, and (2) category-aware POI recommendation. Matrix factorization is employed to predict a user's preference transitions over categories and then her preference on locations in the corresponding categories. 
Real data based experiments demonstrate that our approach outperforms the state-of-the-art POI recommendation models by at least 39.75% in terms of recall.", "title": "" }, { "docid": "1b78650b979b0043eeb3e7478a263846", "text": "Our solutions was launched using a want to function as a full on-line digital local library that gives use of many PDF guide catalog. You may find many different types of e-guide as well as other literatures from my papers data bank. Specific popular topics that spread out on our catalog are famous books, answer key, assessment test questions and answer, guideline paper, training guideline, quiz test, consumer guide, consumer guidance, service instructions, restoration handbook, and so forth.", "title": "" }, { "docid": "ac0707f876589125d84a51dc966b3d33", "text": "Facebook is the world's largest social network, connecting over 800 million users worldwide. The type of phenomenal growth experienced by Facebook in a short time is rare for any technology company. As the Facebook user base approaches the 1 billion mark, a number of exciting opportunities await the world of social networking and the future of the web. We present a case study of what it is like to design for a billion users at Facebook from the perspective of designers, engineers, managers, user experience researchers, and other stakeholders at the company. Our case study illustrates various complexities and tradeoffs in design through a Human-Computer Interaction (HCI) lens and highlights implications for tackling the challenges through research and practice.", "title": "" }, { "docid": "ceaa36ef5884f7fadd111744dc85f0c1", "text": "One-shot learning – the human ability to learn a new concept from just one or a few examples – poses a challenge to traditional learning algorithms, although approaches based on Hierarchical Bayesian models and compositional representations have been making headway. This paper investigates how children and adults readily learn the spoken form of new words from one example – recognizing arbitrary instances of a novel phonological sequence, and excluding non-instances, regardless of speaker identity and acoustic variability. This is an essential step on the way to learning a word’s meaning and learning to use it, and we develop a Hierarchical Bayesian acoustic model that can learn spoken words from one example, utilizing compositions of phoneme-like units that are the product of unsupervised learning. We compare people and computational models on one-shot classification and generation tasks with novel Japanese words, finding that the learned units play an important role in achieving good performance.", "title": "" }, { "docid": "bb2504b2275a20010c0d5f9050173d40", "text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. 
In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.", "title": "" }, { "docid": "1e0a4246c81896c3fd5175bc10065460", "text": "Automatic modulation recognition (AMR) is becoming more important because it is usable in advanced general-purpose communication such as, cognitive radio, as well as, specific applications. Therefore, developments should be made for widely used modulation types; machine learning techniques should be employed for this problem. In this study, we have evaluated performances of different machine learning algorithms for AMR. Specifically, we have evaluated performances of artificial neural networks, support vector machines, random forest tree, k-nearest neighbor, Hoeffding tree, logistic regression, Naive Bayes and Gradient Boosted Regression Tree methods to obtain comparative results. The most preferred feature extraction methods in the literature have been used for a set of modulation types for general-purpose communication. We have considered AWGN and Rayleigh channel models evaluating their recognition performance as well as having made recognition performance improvement over Rayleigh for low SNR values using the reception diversity technique. We have compared their recognition performance in the accuracy metric, and plotted them as well. Furthermore, we have served confusion matrices for some particular experiments.", "title": "" }, { "docid": "e5e4349bb677bb128dcf1385c34cdf41", "text": "The occurrence of eight phosphorus flame retardants (PFRs) was investigated in 53 composite food samples from 12 food categories, collected in 2015 for a Swedish food market basket study. 2-ethylhexyl diphenyl phosphate (EHDPHP), detected in most food categories, had the highest median concentrations (9 ng/g ww, pastries). It was followed by triphenyl phosphate (TPHP) (2.6 ng/g ww, fats/oils), tris(1,3-dichloro-2-propyl) phosphate (TDCIPP) (1.0 ng/g ww, fats/oils), tris(2-chloroethyl) phosphate (TCEP) (1.0 ng/g ww, fats/oils), and tris(1-chloro-2-propyl) phosphate (TCIPP) (0.80 ng/g ww, pastries). Tris(2-ethylhexyl) phosphate (TEHP), tri-n-butyl phosphate (TNBP), and tris(2-butoxyethyl) phosphate (TBOEP) were not detected in the analyzed food samples. The major contributor to the total dietary intake was EHDPHP (57%), and the food categories which contributed the most to the total intake of PFRs were processed food, such as cereals (26%), pastries (10%), sugar/sweets (11%), and beverages (17%). The daily per capita intake of PFRs (TCEP, TPHP, EHDPHP, TDCIPP, TCIPP) from food ranged from 406 to 3266 ng/day (or 6-49 ng/kg bw/day), lower than the health-based reference doses. This is the first study reporting PFR intakes from other food categories than fish (here accounting for 3%). Our results suggest that the estimated human dietary exposure to PFRs may be equally important to the ingestion of dust.", "title": "" }, { "docid": "8da50eee8aaebe575eeaceae49c9fb37", "text": "In this paper, we propose a set of language resources for building Turkish language processing applications. Specifically, we present a finite-state implementation of a morphological parser, an averaged perceptron-based morphological disambiguator, and compilation of a web corpus. Turkish is an agglutinative language with a highly productive inflectional and derivational morphology. 
We present an implementation of a morphological parser based on two-level morphology. This parser is one of the most complete parsers for Turkish, and it runs independently of any other external system such as PCKIMMO, in contrast to existing parsers. Due to the complex phonology and morphology of Turkish, parsing introduces some ambiguous parses. We developed a morphological disambiguator with an accuracy of about 98% using the averaged perceptron algorithm. We also present our efforts to build a Turkish web corpus of about 423 million words.", "title": "" } ]
scidocsrr
22a3a22e8ffb4d57e6a649a6fef06d4e
Structural detection of android malware using embedded call graphs
[ { "docid": "413d6b01d62148fa86627f7cede5c53a", "text": "Each day, anti-virus companies receive tens of thousands samples of potentially harmful executables. Many of the malicious samples are variations of previously encountered malware, created by their authors to evade pattern-based detection. Dealing with these large amounts of data requires robust, automatic detection approaches. This paper studies malware classification based on call graph clustering. By representing malware samples as call graphs, it is possible to abstract certain variations away, enabling the detection of structural similarities between samples. The ability to cluster similar samples together will make more generic detection techniques possible, thereby targeting the commonalities of the samples within a cluster. To compare call graphs mutually, we compute pairwise graph similarity scores via graph matchings which approximately minimize the graph edit distance. Next, to facilitate the discovery of similar malware samples, we employ several clustering algorithms, including k-medoids and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Clustering experiments are conducted on a collection of real malware samples, and the results are evaluated against manual classifications provided by human malware analysts. Experiments show that it is indeed possible to accurately detect malware families via call graph clustering. We anticipate that in the future, call graphs can be used to analyse the emergence of new malware families, and ultimately to automate implementation of generic detection schemes.", "title": "" } ]
[ { "docid": "74f017db6e98b068b29698886caec368", "text": "Social networks have become an additional marketing channel that could be integrated with the traditional ones as a part of the marketing mix. The change in the dynamics of the marketing interchange between companies and consumers as introduced by social networks has placed a focus on the non-transactional customer behavior. In this new marketing era, the terms engagement and participation became the central non-transactional constructs, used to describe the nature of participants’ specific interactions and/or interactive experiences. These changes imposed challenges to the traditional one-way marketing, resulting in companies experimenting with many different approaches, thus shaping a successful social media approach based on the trial-and-error experiences. To provide insights to practitioners willing to utilize social networks for marketing purposes, our study analyzes the influencing factors in terms of characteristics of the content communicated by the company, such as media type, content type, posting day and time, over the level of online customer engagement measured by number of likes, comments and shares, and interaction duration for the domain of a Facebook brand page. Our results show that there is a different effect of the analyzed factors over individual engagement measures. We discuss the implications of our findings for social media marketing.", "title": "" }, { "docid": "3c47a26bfe8221828da80a32b993fbc3", "text": "Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve the NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary and entity class are explored under Conditional Random Fields (CRFs) framework. Experiments on SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of the state-of-the-art NER systems. Incorporating the non-local features into the NER systems using local features alone, our best system achieves a 23.56% (25.26%) relative error reduction on the recall and 17.10% (11.36%) relative error reduction on the F1 score; the improved F1 score 89.38% (90.09%) is significantly superior to the best NER system with F1 of 86.51% (89.03%) participated in the closed track.", "title": "" }, { "docid": "374b3e207a868c388f0b814c457f6871", "text": "BACKGROUND\nQuadriceps strengthening exercises are part of the treatment of patellofemoral pain (PFP), but the heavy resistance exercises may aggravate knee pain. Blood flow restriction (BFR) training may provide a low-load quadriceps strengthening method to treat PFP.\n\n\nMETHODS\nSeventy-nine participants were randomly allocated to a standardised quadriceps strengthening (standard) or low-load BFR. Both groups performed 8 weeks of leg press and leg extension, the standard group at 70% of 1 repetition maximum (1RM) and the BFR group at 30% of 1RM. Interventions were compared using repeated-measures analysis of variance for Kujala Patellofemoral Score, Visual Analogue Scale for 'worst pain' and 'pain with daily activity', isometric knee extensor torque (Newton metre) and quadriceps muscle thickness (cm). Subgroup analyses were performed on those participants with painful resisted knee extension at 60°.\n\n\nRESULTS\nSixty-nine participants (87%) completed the study (standard, n=34; BFR, n=35). 
The BFR group had a 93% greater reduction in pain with activities of daily living (p=0.02) than the standard group. Participants with painful resisted knee extension (n=39) had greater increases in knee extensor torque with BFR than standard (p<0.01). No between-group differences were found for change in Kujala Patellofemoral Score (p=0.31), worst pain (p=0.24), knee extensor torque (p=0.07) or quadriceps thickness (p=0.2). No difference was found between interventions at 6 months.\n\n\nCONCLUSION\nCompared with standard quadriceps strengthening, low load with BFR produced greater reduction in pain with daily living at 8 weeks in people with PFP. Improvements were similar between groups in worst pain and Kujala score. The subgroup with painful resisted knee extension had larger improvements in quadriceps strength from BFR.\n\n\nTRIAL REGISTRATION NUMBER\n12614001164684.", "title": "" }, { "docid": "eff7d3775d12687c81ae91b130c7c562", "text": "We propose a novel approach for sparse probabilistic principal component analysis, that combines a low rank representation for the latent factors and loadings with a novel sparse variational inference approach for estimating distributions of latent variables subject to sparse support constraints. Inference and parameter estimation for the resulting model is achieved via expectation maximization with a novel variational inference method for the E-step that induces sparsity. We show that this inference problem can be reduced to discrete optimal support selection. The discrete optimization is submodular, hence, greedy selection is guaranteed to achieve 1-1/e fraction of the optimal. Empirical studies indicate effectiveness of the proposed approach for the recovery of a parsimonious decomposition as compared to established baseline methods. We also evaluate our method against state-of-the-art methods on high dimensional fMRI data, and show that the method performs as well as or better than other methods.", "title": "" }, { "docid": "f9f20dcb568beccd50a725123a126914", "text": "In this paper we present two ontologies, i.e., BiRO and C4O, that allow users to describe bibliographic references in an accurate way, and we introduce REnhancer, a proof-of-concept implementation of a converter that takes as input a raw-text list of references and produces an RDF dataset according to the BiRO and C4O ontologies.", "title": "" }, { "docid": "2c222bb815ca26240e72072e5c9a1d42", "text": "Novelty search is a state-of-the-art evolutionary approach that promotes behavioural novelty instead of pursuing a static objective. Along with a large number of successful applications, many different variants of novelty search have been proposed. It is still unclear, however, how some key parameters and algorithmic components influence the evolutionary dynamics and performance of novelty search. In this paper, we conduct a comprehensive empirical study focused on novelty search's algorithmic components. We study the \"k\" parameter -- the number of nearest neighbours used in the computation of novelty scores; the use and function of an archive; how to combine novelty search with fitness-based evolution; and how to configure the mutation rate of the underlying evolutionary algorithm. Our study is conducted in a simulated maze navigation task. Our results show that the configuration of novelty search can have a significant impact on performance and behaviour space exploration. 
We conclude with a number of guidelines for the implementation and configuration of novelty search, which should help future practitioners to apply novelty search more effectively.", "title": "" }, { "docid": "587f6e73ca6653860cda66238d2ba146", "text": "Cable-suspended robots are structurally similar to parallel actuated robots but with the fundamental difference that cables can only pull the end-effector but not push it. From a scientific point of view, this feature makes feedback control of cable-suspended robots more challenging than their counterpart parallel-actuated robots. In the case with redundant cables, feedback control laws can be designed to make all tensions positive while attaining desired control performance. This paper presents approaches to design positive tension controllers for cable suspended robots with redundant cables. Their effectiveness is demonstrated through simulations and experiments on a three degree-of-freedom cable suspended robots.", "title": "" }, { "docid": "95296a02831a1f8fb50288503bea75ad", "text": "The Residual Network (ResNet), proposed in He et al. (2015a), utilized shortcut connections to significantly reduce the difficulty of training, which resulted in great performance boosts in terms of both training and generalization error. It was empirically observed in He et al. (2015a) that stacking more layers of residual blocks with shortcut 2 results in smaller training error, while it is not true for shortcut of length 1 or 3. We provide a theoretical explanation for the uniqueness of shortcut 2. We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. Shortcuts of higher depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. The shortcut 1, however, is essentially equivalent to no shortcuts, which has a condition number exploding to infinity as the number of layers grows. We further argue that as the number of layers tends to infinity, it suffices to only look at the loss function at the zero initial point. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with shortcut 2 achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of deeper depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process.", "title": "" }, { "docid": "d8634beb04329e72e462df98d31b2003", "text": "Link prediction is a key technique in many applications in social networks, where potential links between entities need to be predicted. Conventional link prediction techniques deal with either homogeneous entities, e.g., people to people, item to item links, or non-reciprocal relationships, e.g., people to item links. However, a challenging problem in link prediction is that of heterogeneous and reciprocal link prediction, such as accurate prediction of matches on an online dating site, jobs or workers on employment websites, where the links are reciprocally determined by both entities that heterogeneously belong to disjoint groups. 
The nature and causes of interactions in these domains makes heterogeneous and reciprocal link prediction significantly different from the conventional version of the problem. In this work, we address these issues by proposing a novel learnable framework called ReHeLP, which learns heterogeneous and reciprocal knowledge from collaborative information and demonstrate its impact on link prediction. Evaluation on a large commercial online dating dataset shows the success of the proposed method and its promise for link prediction.", "title": "" }, { "docid": "7eb0184ada44ab451e412ec310db862a", "text": "Extremely high correlations between repeated judgments of visual appeal of homepages shown for 50 milliseconds have been interpreted as evidence for a mere exposure effect [Lindgaard et al. 2006]. Continuing that work, the present research had two objectives. First, it investigated the relationship between judgments differing in cognitive demands. Second, it began to identify specific visual attributes that appear to contribute to different judgments. Three experiments are reported. All used the stimuli and viewing time as before. Using a paradigm known to disrupt processing beyond the stimulus offset, Experiment 1 was designed to ensure that the previous findings could not be attributed to such continued processing. Adopting a within-subject design, Experiment 2 investigated the extent to which judgments differing in cognitive demands (visual appeal, perceived usability, trustworthiness) may be driven by the visual characteristics of a Web page. It also enabled analyses of visual attributes that contributed most to the different judgments. Experiment 3 replicated Experiment 2 but using a between-subject design to ensure that no practice effect could occur. The results suggest that all three types of judgments are largely driven by visual appeal, but that cognitively demanding judgments are processed in a qualitatively different manner than visual appeal, and that they rely on somewhat different visual attributes. A model accounting for the results is provided.", "title": "" }, { "docid": "3159856141b06a78f0d60ae8e118a251", "text": "This paper introduces Associative Compression Networks (ACNs), a new framework for variational autoencoding with neural networks. The system differs from existing variational autoencoders in that the prior distribution used to model each code is conditioned on a similar code from the dataset. In compression terms this equates to sequentially transmitting the data using an ordering determined by proximity in latent space. As the prior need only account for local, rather than global variations in the latent space, the coding cost is greatly reduced, leading to rich, informative codes, even when autoregressive decoders are used. Experimental results on MNIST, CIFAR-10, ImageNet and CelebA show that ACNs can yield improved dataset compression relative to orderagnostic generative models, with an upper bound of 73.9 nats per image on binarized MNIST. They also demonstrate that ACNs learn high-level features such as object class, writing style, pose and facial expression, which can be used to cluster and classify the data, as well as to generate diverse and convincing samples.", "title": "" }, { "docid": "a879b04fa12a7f26f4a9d30f4110183b", "text": "Due to the high volume of information and electronic documents on the Web, it is almost impossible for a human to study, research and analyze this volume of text. 
Summarizing the main idea and the major concept of the context enables the humans to read the summary of a large volume of text quickly and decide whether to further dig into details. Most of the existing summarization approaches have applied probability and statistics based techniques. But these approaches cannot achieve high accuracy. We observe that attention to the concept and the meaning of the context could greatly improve summarization accuracy, and due to the uncertainty that exists in the summarization methods, we simulate human like methods by integrating fuzzy logic with traditional statistical approaches in this study. The results of this study indicate that our approach can deal with uncertainty and achieve better results when compared with existing methods.", "title": "" }, { "docid": "49d164ec845f6201f56e18a575ed9436", "text": "This research explores a Natural Language Processing technique utilized for the automatic reduction of melodies: the Probabilistic Context-Free Grammar (PCFG). Automatic melodic reduction was previously explored by means of a probabilistic grammar [11] [1]. However, each of these methods used unsupervised learning to estimate the probabilities for the grammar rules, and thus a corpusbased evaluation was not performed. A dataset of analyses using the Generative Theory of Tonal Music (GTTM) exists [13], which contains 300 Western tonal melodies and their corresponding melodic reductions in tree format. In this work, supervised learning is used to train a PCFG for the task of melodic reduction, using the tree analyses provided by the GTTM dataset. The resulting model is evaluated on its ability to create accurate reduction trees, based on a node-by-node comparison with ground-truth trees. Multiple data representations are explored, and example output reductions are shown. Motivations for performing melodic reduction include melodic identification and similarity, efficient storage of melodies, automatic composition, variation matching, and automatic harmonic analysis.", "title": "" }, { "docid": "66b088871549d5ec924dbe500522d6f8", "text": "Being able to effectively measure similarity between patents in a complex patent citation network is a crucial task in understanding patent relatedness. In the past, techniques such as text mining and keyword analysis have been applied for patent similarity calculation. The drawback of these approaches is that they depend on word choice and writing style of authors. Most existing graph-based approaches use common neighbor-based measures, which only consider direct adjacency. In this work we propose new similarity measures for patents in a patent citation network using only the patent citation network structure. The proposed similarity measures leverage direct and indirect co-citation links between patents. A challenge is when some patents receive a large number of citations, thus are considered more similar to many other patents in the patent citation network. To overcome this challenge, we propose a normalization technique to account for the case where some pairs are ranked very similar to each other because they both are cited by many other patents. We validate our proposed similarity measures using US class codes for US patents and the well-known Jaccard similarity index. 
Experiments show that the proposed methods perform well when compared to the Jaccard similarity index.", "title": "" }, { "docid": "eeb31177629a38882fa3664ad0ddfb48", "text": "Autonomous cars will likely hit the market soon, but trust into such a technology is one of the big discussion points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car’s interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations in a user study, overlaid to a driving scene: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car’s indicators as the baseline. The world in miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust, however, we did not find a significant difference between the chauffeur and the baseline. ACM Classification", "title": "" }, { "docid": "9efa07624d538272a5da844c74b2f56d", "text": "Electronic health records (EHRs), digitization of patients’ health record, offer many advantages over traditional ways of keeping patients’ records, such as easing data management and facilitating quick access and real-time treatment. EHRs are a rich source of information for research (e.g. in data analytics), but there is a risk that the published data (or its leakage) can compromise patient privacy. The k-anonymity model is a widely used privacy model to study privacy breaches, but this model only studies privacy against identity disclosure. Other extensions to mitigate existing limitations in k-anonymity model include p-sensitive k-anonymity model, p+-sensitive k-anonymity model, and (p, α)-sensitive k-anonymity model. In this paper, we point out that these existing models are inadequate in preserving the privacy of end users. Specifically, we identify situations where p+sensitive k-anonymity model is unable to preserve the privacy of individuals when an adversary can identify similarities among the categories of sensitive values. We term such attack as Categorical Similarity Attack (CSA). Thus, we propose a balanced p+-sensitive k-anonymity model, as an extension of the p+-sensitive k-anonymity model. We then formally analyze the proposed model using High-Level Petri Nets (HLPN) and verify its properties using SMT-lib and Z3 solver.We then evaluate the utility of release data using standard metrics and show that our model outperforms its counterparts in terms of privacy vs. utility tradeoff. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "18ce27c1840596779805efaeec18f3ed", "text": "Accurate inversion of land surface geo/biophysical variables from remote sensing data for earth observation applications is an essential and challenging topic for the global change research. Land surface temperature (LST) is one of the key parameters in the physics of earth surface processes from local to global scales. The importance of LST is being increasingly recognized and there is a strong interest in developing methodologies to measure LST from the space. Landsat 8 Thermal Infrared Sensor (TIRS) is the newest thermal infrared sensor for the Landsat project, providing two adjacent thermal bands, which has a great benefit for the LST inversion. 
In this paper, we compared three different approaches for LST inversion from TIRS, including the radiative transfer equation-based method, the split-window algorithm and the single channel method. Four selected energy balance monitoring sites from the Surface Radiation Budget Network (SURFRAD) were used for validation, combining with the MODIS 8 day emissivity product. For the investigated sites and scenes, results show that the LST inverted from the radiative transfer equation-based method using band 10 has the highest accuracy with RMSE lower than 1 K, while the SW algorithm has moderate accuracy and the SC method has the lowest accuracy. OPEN ACCESS Remote Sens. 2014, 6 9830", "title": "" }, { "docid": "30e798ef3668df14f1625d40c53011a0", "text": "Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d3c11fc96110e1ab0b801a5ba81133e1", "text": "Two experiments comparing user performance on ClearType and Regular displays are reported. In the first, 26 participants scanned a series of spreadsheets for target information. Speed of performance was significantly faster with ClearType. In the second experiment, 25 users read two articles for meaning. Reading speed was significantly faster for ClearType. In both experiments no differences in accuracy of performance or visual fatigue scores were observed. The data also reveal substantial individual differences in performance suggesting ClearType may not be universally beneficial to information workers.", "title": "" }, { "docid": "006793685095c0772a1fe795d3ddbd76", "text": "Legislators, designers of legal information systems, as well as citizens face often problems due to the interdependence of the laws and the growing number of references needed to interpret them. In this paper, we introduce the ”Legislation Network” as a novel approach to address several quite challenging issues for identifying and quantifying the complexity inside the Legal Domain. 
We have collected an extensive data set from a legislation corpus spanning more than 60 years, as published in the Official Journal of the European Union, and we further analysed it as a complex network, thus gaining insight into its topological structure. Among other issues, we have performed a temporal analysis of the evolution of the Legislation Network, as well as a robust resilience test to assess its vulnerability under specific cases that may lead to possible breakdowns. Results are quite promising, showing that our approach can lead towards an enhanced explanation of the structure and evolution of legislation properties.", "title": "" } ]
scidocsrr
b8224bd4396a30daa03b956720c6431b
Broadband design of a low-profile, circularly polarized crossed dipole antenna on an AMC surface
[ { "docid": "2de9a9887c9fe3bc1c750e0fb81934d7", "text": "An axial-mode helical antenna backed by a perfect electric conductor (PEC reflector) is optimized to radiate a circularly polarized (CP) wave, using the finite-difference time-domain method (FDTDM). After the optimization, the PEC reflector is replaced with a corrugated reflector. The effects of the corrugated reflector on the current distribution along the helical arm and the radiation pattern are investigated. A reduction in the backward radiation is attributed to the reduction in the current flowing over the rear surface of the corrugated reflector. A spiral antenna backed by a PEC reflector of finite extent is also analyzed using the FDTDM. As the antenna height decreases, the reverse current toward the feed point increases, resulting in deterioration of the axial ratio. To overcome this deterioration, the PEC reflector is replaced with an electromagnetic band-gap (EBG) reflector composed of mushroom-like elements. Analysis reveals that the spiral radiates a CP wave even when the spiral is located close to the reflector (0.06 wavelength above the EBG surface). The input impedance for the EBG reflector is more stable over a wide frequency band than that for the PEC reflector.", "title": "" }, { "docid": "3b36fa3a5cb177cb92921a15ce1820a0", "text": "The concept of a novel reactive impedance surface (RIS) as a substrate for planar antennas, that can miniaturize the size and significantly enhance both the bandwidth and the radiation characteristics of an antenna is introduced. Using the exact image formulation for the fields of elementary sources above impedance surfaces, it is shown that a purely reactive impedance plane with a specific surface reactance can minimize the interaction between the elementary source and its image in the RIS substrate. An RIS can be tuned anywhere between perfectly electric and magnetic conductor (PEC and PMC) surfaces offering a property to achieve the optimal bandwidth and miniaturization factor. It is demonstrated that RIS can provide performance superior to PMC when used as substrate for antennas. The RIS substrate is designed utilizing two-dimensional periodic printed metallic patches on a metal-backed high dielectric material. A simplified circuit model describing the physical phenomenon of the periodic surface is developed for simple analysis and design of the RIS substrate. Also a finite-difference time-domain (FDTD) full-wave analysis in conjunction with periodic boundary conditions and perfectly matched layer walls is applied to provide comprehensive study and analysis of complex antennas on such substrates. Examples of different planar antennas including dipole and patch antennas on RIS are considered, and their characteristics are compared with those obtained from the same antennas over PEC and PMC. The simulations compare very well with measured results obtained from a prototype /spl lambda//10 miniaturized patch antenna fabricated on an RIS substrate. This antenna shows measured relative bandwidth, gain, and radiation efficiency of BW=6.7, G=4.5 dBi, and e/sub r/=90, respectively, which constitutes the highest bandwidth, gain, and efficiency for such a small size thin planar antenna.", "title": "" }, { "docid": "784f3100dbd852b249c0e9b0761907f1", "text": "The bi-directional beam from an equiangular spiral antenna (EAS) is changed to a unidirectional beam using an electromagnetic band gap (EBG) reflector. 
The antenna height, measured from the upper surface of the EBG reflector to the spiral arms, is chosen to be extremely small to realize a low-profile antenna: 0.07 wavelength at the lowest analysis frequency of 3 GHz. The analysis shows that the EAS backed by the EBG reflector does not reproduce the inherent wideband axial ratio characteristic observed when the EAS is isolated in free space. The deterioration in the axial ratio is examined by decomposing the total radiation field into two field components: one component from the equiangular spiral and the other from the EBG reflector. The examination reveals that the amplitudes and phases of these two field components do not satisfy the constructive relationship necessary for circularly polarized radiation. Based on this finding, next, the EBG reflector is modified by gradually removing the patch elements from the center region of the reflector, thereby satisfying the required constructive relationship between the two field components. This equiangular spiral with a modified EBG reflector shows wideband characteristics with respect to the axial ratio, input impedance and gain within the design frequency band (4-9 GHz). Note that, for comparison, the antenna characteristics for an EAS isolated in free space and an EAS backed by a perfect electric conductor are also presented.", "title": "" }, { "docid": "160e06b33d6db64f38480c62989908fb", "text": "A theoretical and experimental study has been performed on a low-profile, 2.4-GHz dipole antenna that uses a frequency-selective surface (FSS) with varactor-tuned unit cells. The tunable unit cell is a square patch with a small aperture on either side to accommodate the varactor diodes. The varactors are placed only along one dimension to avoid the use of vias and simplify the dc bias network. An analytical circuit model for this type of electrically asymmetric unit cell is shown. The measured data demonstrate tunability from 2.15 to 2.63 GHz with peak gains at broadside that range from 3.7- to 5-dBi and instantaneous bandwidths of 50 to 280 MHz within the tuning range. It is shown that tuning for optimum performance in the presence of a human-core body phantom can be achieved. The total antenna thickness is approximately λ/45.", "title": "" } ]
[ { "docid": "45d6edf2984165e2ed6996c0987f96fc", "text": "Standard approaches for ellipse fitting are based on the minimization of algebraic or geometric distance between the given data and a template ellipse. When the data are noisy and come from a partial ellipse, the state-of-the-art methods tend to produce biased ellipses. We rely on the sampling structure of the underlying signal and show that the x - and y -coordinate functions of an ellipse are finite-rate-of-innovation (FRI) signals, and that their parameters are estimable from partial data. We consider both uniform and nonuniform sampling scenarios in the presence of noise and show that the data can be modeled as a sum of random amplitude-modulated complex exponentials. A low-pass filter is used to suppress noise and approximate the data as a sum of weighted complex exponentials. The annihilating filter used in FRI approaches is applied to estimate the sampling interval in the closed form. We perform experiments on simulated and real data, and assess both objective and subjective performances in comparison with the state-of-the-art ellipse fitting methods. The proposed method produces ellipses with lesser bias. Furthermore, the mean-squared error is lesser by about 2 to 10 dB. We show the applications of ellipse fitting in iris images starting from partial edge contours, and to free-hand ellipses drawn on a touch-screen tablet.", "title": "" }, { "docid": "774690eaef2d293320df0c162f44af95", "text": "Having a long historical past in traditional Chinese medicine, Ganoderma Lucidum (G. Lucidum) is a type of mushroom believed to extend life and promote health. Due to the increasing consumption pattern, it has been cultivated and marketed intensively since the 1970s. It is claimed to be effective in the prevention and treatment of many diseases, and in addition, it exerts anticancer properties. Almost all the data on the benefits of G. Lucidum are based on laboratory and preclinical studies. The few clinical studies conducted are questionable. Nevertheless, when the findings obtained from laboratory studies are considered, it turns that G. Lucidum is likely to have some benefits for cancer patients. What is important at this point is to determine the components that will provide these benefits, and use them in drug development, after testing their reliability. 
In conclusion, it would be the right approach to abstain from using and incentivizing this product, until its benefits and harms are set out clearly, by considering its potential side effects.", "title": "" }, { "docid": "c01b022ed57fa44e9ad0d652f8afac0b", "text": "Complexity of cyanobacterial exopolysaccharides: composition, structures, inducing factors and putative genes involved in their biosynthesis and assembly Sara Pereira, Andrea Zille, Ernesto Micheletti, Pedro Moradas-Ferreira, Roberto De Philippis & Paula Tamagnini IBMC – Instituto de Biologia Molecular e Celular, Universidade do Porto, Porto, Portugal; Departamento de Botânica, Faculdade de Ciências, Universidade do Porto, Porto, Portugal; Department of Agricultural Biotechnology, University of Florence, Florence, Italy; and Instituto de Ciências Biomédicas Abel Salazar (ICBAS), Universidade do Porto, Porto, Portugal", "title": "" }, { "docid": "4825ada359be4788a52f1fd616142a19", "text": "Attachment theory is extended to pertain to developmental changes in the nature of children's attachments to parents and surrogate figures during the years beyond infancy, and to the nature of other affectional bonds throughout the life cycle. Various types of affectional bonds are examined in terms of the behavioral systems characteristic of each and the ways in which these systems interact. Specifically, the following are discussed: (a) the caregiving system that underlies parents' bonds to their children, and a comparison of these bonds with children's attachments to their parents; (b) sexual pair-bonds and their basic components entailing the reproductive, attachment, and caregiving systems; (c) friendships both in childhood and adulthood, the behavioral systems underlying them, and under what circumstances they may become enduring bonds; and (d) kinship bonds (other than those linking parents and their children) and why they may be especially enduring.", "title": "" }, { "docid": "c2402cea6e52ee98bc0c3de084580194", "text": "We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video subshots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between subshots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subshot summary. Whereas traditional methods optimize a summary's diversity or representativeness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.", "title": "" }, { "docid": "909ec68a644cfd1d338270ee67144c23", "text": "We have constructed an optical tweezer using two lasers (830 nm and 1064 nm) combined with micropipette manipulation having sub-pN force sensitivity. Sample position is controlled within nanometer accuracy using XYZ piezo-electric stage. The position of the bead in the trap is monitored using single particle laser backscattering technique. The instrument is automated to operate in constant force, constant velocity or constant position measurement. 
We present data on single DNA force-extension, dynamics of DNA integration on membranes and optically trapped bead–cell interactions. A quantitative analysis of single DNA and protein mechanics, assembly and dynamics opens up new possibilities in nanobioscience.", "title": "" }, { "docid": "3fdd81a3e2c86f43152f72e159735a42", "text": "Class imbalance learning tackles supervised learning problems where some classes have significantly more examples than others. Most of the existing research focused only on binary-class cases. In this paper, we study multiclass imbalance problems and propose a dynamic sampling method (DyS) for multilayer perceptrons (MLP). In DyS, for each epoch of the training process, every example is fed to the current MLP and then the probability of it being selected for training the MLP is estimated. DyS dynamically selects informative data to train the MLP. In order to evaluate DyS and understand its strength and weakness, comprehensive experimental studies have been carried out. Results on 20 multiclass imbalanced data sets show that DyS can outperform the compared methods, including pre-sample methods, active learning methods, cost-sensitive methods, and boosting-type methods.", "title": "" }, { "docid": "22572394c6f522b70e1f14b8156a5601", "text": "A new substrate integrated horn antenna with hard side walls combined with a couple of soft surfaces is introduced. The horn takes advantage of the air medium for propagation inside, while having a thickness of dielectric on the walls to realize hard conditions. The covering layers of the air-filled horn are equipped with strip-via arrays, which act as soft surfaces around the horn aperture to reduce the back radiations. The uniform amplitude distribution of the aperture resulting from the hard conditions and the phase correction combined with the profiled horn walls provided a narrow beamwidth and −13 dB sidelobe levels in the frequency of the hard condition, which is validated by the simulated and measured results.", "title": "" }, { "docid": "de831adaf4a05d58c41cd5f75dfee769", "text": "In men, high levels of endogenous testosterone (T) seem to encourage behavior intended to dominate--to enhance one's status over--other people. Sometimes dominant behavior is aggressive, its apparent intent being to inflict harm on another person, but often dominance is expressed nonaggressively. Sometimes dominant behavior takes the form of antisocial behavior, including rebellion against authority and law breaking. Measurement of T at a single point in time, presumably indicative of a man's basal T level, predicts many of these dominant or antisocial behaviors. T not only affects behavior but also responds to it. The act of competing for dominant status affects male T levels in two ways. First, T rises in the face of a challenge, as if it were an anticipatory response to impending competition. Second, after the competition, T rises in winners and declines in losers. Thus, there is a reciprocity between T and dominance behavior, each affecting the other. We contrast a reciprocal model, in which T level is variable, acting as both a cause and effect of behavior, with a basal model, in which T level is assumed to be a persistent trait that influences behavior. An unusual data set on Air Force veterans, in which data were collected four times over a decade, enables us to compare the basal and reciprocal models as explanations for the relationship between T and divorce. 
We discuss sociological implications of these models.", "title": "" }, { "docid": "dc6a03c3c2831e5912fdeac2a4e22ae5", "text": "Topic models jointly learn topics and document-level topic distribution. Extrinsic evaluation of topic models tends to focus exclusively on topic-level evaluation, e.g. by assessing the coherence of topics. We demonstrate that there can be large discrepancies between topic- and document-level model quality, and that basing model evaluation on topic-level analysis can be highly misleading. We propose a method for automatically predicting topic model quality based on analysis of document-level topic allocations, and provide empirical evidence for its robustness.", "title": "" }, { "docid": "af45d1bbdcbd94bbe5ae2cc0936f3650", "text": "Rationale: The imidazopyridine hypnotic zolpidem may produce less memory and cognitive impairment than classic benzodiazepines, due to its relatively low binding affinity for the benzodiazepine receptor subtypes found in areas of the brain which are involved in learning and memory. Objectives: The study was designed to compare the acute effects of single oral doses of zolpidem (5, 10, 20 mg/70 kg) and the benzodiazepine hypnotic triazolam (0.125, 0.25, and 0.5 mg/70 kg) on specific memory and attentional processes. Methods: Drug effects on memory for target (i.e., focal) information and contextual information (i.e., peripheral details surrounding a target stimulus presentation) were evaluated using a source monitoring paradigm, and drug effects on selective attention mechanisms were evaluated using a negative priming paradigm, in 18 healthy volunteers in a double-blind, placebo-controlled, crossover design. Results: Triazolam and zolpidem produced strikingly similar dose-related effects on memory for target information. Both triazolam and zolpidem impaired subjects' ability to remember whether a word stimulus had been presented to them on the computer screen or whether they had been asked to generate the stimulus based on an antonym cue (memory for the origin of a stimulus, which is one type of contextual information). The results suggested that triazolam, but not zolpidem, impaired memory for the screen location of picture stimuli (spatial contextual information). Although both triazolam and zolpidem increased overall reaction time in the negative priming task, only triazolam increased the magnitude of negative priming relative to placebo. Conclusions: The observed differences between triazolam and zolpidem have implications for the cognitive and pharmacological mechanisms underlying drug-induced deficits in specific memory and attentional processes, as well for the cognitive and brain mechanisms underlying these processes.", "title": "" }, { "docid": "638336dba1dd589b0f708a9426483827", "text": "Girard's linear logic can be used to model programming languages in which each bound variable name has exactly one \"occurrence\"---i.e., no variable can have implicit \"fan-out\"; multiple uses require explicit duplication. Among other nice properties, \"linear\" languages need no garbage collector, yet have no dangling reference problems. We show a natural equivalence between a \"linear\" programming language and a stack machine in which the top items can undergo arbitrary permutations. 
Such permutation stack machines can be considered combinator abstractions of Moore's Forth programming language.", "title": "" }, { "docid": "f5c69697719fe04f29bbdcb2efa9d160", "text": "We propose that late modern policing practices, that rely on neighbourhood intelligence, the monitoring of tensions, surveillance and policing by accommodation, need to be augmented in light of emerging ‘cyber-neighbourhoods’, namely social media networks. The 2011 riots in England were the first to evidence the widespread use of social media platforms to organise and respond to disorder. The police were ill-equipped to make use of the intelligence emerging from these non-terrestrial networks and were found to be at a disadvantage to the more tech-savvy rioters and the general public. In this paper, we outline the development of the ‘tension engine’ component of the Cardiff Online Social Media ObServatory (COSMOS). This engine affords users with the ability to monitor social media data streams for signs of high tension which can be analysed in order to identify deviations from the ‘norm’ (levels of cohesion/low tension). This analysis can be overlaid onto a palimpsest of curated data, such as official statistics about neighbourhood crime, deprivation and demography, to provide a multidimensional picture of the ‘terrestrial’ and ‘cyber’ streets. As a consequence, this ‘neighbourhood informatics’ enables a means of questioning official constructions of civil unrest through reference to the user-generated accounts of social media and their relationship to other, curated, social and economic data.", "title": "" }, { "docid": "a25bf5c496794bce4b3918d00616f632", "text": "We used adaptive network theory to extend the Rescorla-Wagner (1972) least mean squares (LMS) model of associative learning to phenomena of human learning and judgment. In three experiments subjects learned to categorize hypothetical patients with particular symptom patterns as having certain diseases. When one disease is far more likely than another, the model predicts that subjects will substantially overestimate the diagnosticity of the more valid symptom for the rare disease. The results of Experiments 1 and 2 provide clear support for this prediction in contradistinction to predictions from probability matching, exemplar retrieval, or simple prototype learning models. Experiment 3 contrasted the adaptive network model with one predicting pattern-probability matching when patients always had four symptoms (chosen from four opponent pairs) rather than the presence or absence of each of four symptoms, as in Experiment 1. The results again support the Rescorla-Wagner LMS learning rule as embedded within an adaptive network model.", "title": "" }, { "docid": "e830098f9c045d376177e6d2644d4a06", "text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. 
Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.", "title": "" }, { "docid": "a2d76e1217b0510f82ebccab39b7d387", "text": "The floating photovoltaic system is a new concept in energy technology to meet the needs of our time. The system integrates existing land based photovoltaic technology with a newly developed floating photovoltaic technology. K-water has already completed two floating photovoltaic systems that enable generation of 100kW and 500kW respectively. In this paper, the generation efficiency of floating and land photovoltaic systems were compared and analyzed. Floating PV has shown greater generation efficiency by over 10% compared with the general PV systems installed overland", "title": "" }, { "docid": "76d10dbe734c5a1341dd914a4fdcc1af", "text": "This paper describes novel highly mobile small robots called “Mini-Whegs” that can run and jump (see video). They are derived from our larger Whegs series of robots, which benefit from abstracted cockroach locomotion principles. Key to their success are the three spoked appendages, called “whegs,” which combine the speed and simplicity of wheels with the climbing mobility of legs. To be more compact than the larger Whegs vehicles, Mini-Whegs uses four whegs in an alternating diagonal gait. These 9 cm long robots can run at sustained speeds of over 10 body lengths per second and climb obstacles that are taller than their leg length. They can run forward and backward, on either side. Their robust construction allows them to tumble down a flight of stairs with no damage and carry a payload equal to twice their weight. A jumping mechanism has also been developed that enables Mini-Whegs to surmount much larger obstacles, such as stair steps.", "title": "" }, { "docid": "88a10ea3bae30f371c3f6276beff9e58", "text": "This research is a part of smart farm system in the framework of precision agriculture. The system was installed and tested over a year. The tractor tracking system employs the Global Positioning System (GPS) and ZigBee wireless network based on mesh topology to make the system communicate covering a large area. Router nodes are used for re-transmission of data in the network. A software was developed for acquiring data from tractor, storing data and displaying in real time on a web site.", "title": "" }, { "docid": "ff8e0739931441ffea95c445bd648e2f", "text": "OBJECTIVE\nNearly 38% of U.S. adults use complementary and alternative medicine approaches to manage physical conditions (e.g., chronic pain, arthritis, cancer, heart disease, and high blood pressure) and psychological or emotional health concerns (e.g., post-traumatic stress disorder, anxiety, and depression). Research evidence has accumulated for yoga as an effective treatment approach for these conditions. 
Further, yoga has increased in popularity among healthcare providers and the general population. Given these trends, this study explored perceptions about yoga as a viable complementary treatment to which health professions students would refer patients.\n\n\nPARTICIPANTS\nMore than 1500 students enrolled in health professions programs at a Pacific Northwest school were enrolled; data were obtained from 478 respondents.\n\n\nDESIGN\nThe study assessed willingness to refer patients to yoga as a complementary and alternative medicine for 27 symptoms (identified in the literature as having evidence for yoga's utility), which were subsequently grouped into skeletal, physical, and psychological on the basis of factor analysis. Responses were assessed using a mixed-model analysis of variance with health profession and yoga practitioner as between-subjects variables and symptoms as a within-subjects factor.\n\n\nRESULTS\nIn descending order of likelihood to refer patients to yoga were students in occupational therapy, physician assistant program, psychology, physical therapy, pharmacy, dental hygiene, speech and audiology, and optometry. All groups perceived yoga's greatest utility for skeletal symptoms, followed by psychological and physical symptoms. Findings also revealed a significant positive relationship between level of personal yoga practice and willingness to refer patients to yoga.\n\n\nCONCLUSIONS\nAlthough students expressed some openness to referring patients to yoga, ratings of appropriateness were not accurately aligned with extant evidence base. Personal experience seemed to be a salient factor for accepting yoga as a referral target. These findings suggest the importance of developing strategies to make health professionals more aware of the merits of yoga, regardless of whether they themselves are yoga practitioners.", "title": "" }, { "docid": "cce2e8ee8e62bb5ef4b4fc36756a3f50", "text": "For the development and operating efficiency of Web applications based on the Model-View-Controller (MVC) framework, and, according to the actual business environment and needs in the project practice, the framework of Web application system is studied in this paper. Through the research of Spring MVC framework and Mybatis framework as well as some related core techniques, combined with JSP and JSTL technology, this paper realizes the design of a lightweight Web application framework based on Spring MVC and Mybatis.", "title": "" } ]
scidocsrr
f0cfad19974658135641eee18ab40948
Changes in Self-Definition Impede Recovery From Rejection.
[ { "docid": "070ecf3890362cb4c24682aff5fa01c6", "text": "This review builds on self-control theory (Carver & Scheier, 1998) to develop a theoretical framework for investigating associations of implicit theories with self-regulation. This framework conceptualizes self-regulation in terms of 3 crucial processes: goal setting, goal operating, and goal monitoring. In this meta-analysis, we included articles that reported a quantifiable assessment of implicit theories and at least 1 self-regulatory process or outcome. With a random effects approach used, meta-analytic results (total unique N = 28,217; k = 113) across diverse achievement domains (68% academic) and populations (age range = 5-42; 10 different nationalities; 58% from United States; 44% female) demonstrated that implicit theories predict distinct self-regulatory processes, which, in turn, predict goal achievement. Incremental theories, which, in contrast to entity theories, are characterized by the belief that human attributes are malleable rather than fixed, significantly predicted goal setting (performance goals, r = -.151; learning goals, r = .187), goal operating (helpless-oriented strategies, r = -.238; mastery-oriented strategies, r = .227), and goal monitoring (negative emotions, r = -.233; expectations, r = .157). The effects for goal setting and goal operating were stronger in the presence (vs. absence) of ego threats such as failure feedback. Discussion emphasizes how the present theoretical analysis merges an implicit theory perspective with self-control theory to advance scholarship and unlock major new directions for basic and applied research.", "title": "" }, { "docid": "50f2df90b40ccd80fb687f67288d3a96", "text": "Four experiments examined the functional relationship between interpersonal appraisal and subjective feelings about oneself. Participants imagined receiving one of several positive or negative reactions from another person (Experiments 1, 2, and 3) or actually received interpersonal evaluations (Experiment 4), then completed measures relevant to state self-esteem. All 4 studies showed that subjective feelings were a curvilinear, ogival function of others' appraisals. Although trait self-esteem correlated with state reactions as a main effect, it did not moderate participants' reactions to interpersonal feedback.", "title": "" } ]
[ { "docid": "c3cc032538a10ab2f58ff45acb6d16d0", "text": "How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.", "title": "" }, { "docid": "62a7cf86e1e0f36b77cd606e1c3ea1f7", "text": "Mastering 3D Printing shows you how to get the most out of your printer, including how to design models, choose materials, work with different printers, and integrate 3D printing with traditional prototyping to make techniques like sand casting more efficient. you’ve printed key chains. you’ve printed simple toys. now you’re ready to innovate with your 3D printer to start a business or teach and inspire others. Joan horvath has been an educator, engineer, author, and startup 3D printing company team member. She shows you all of the technical details you need to know to go beyond simple model printing to make your 3D printer work for you as a prototyping device, a teaching tool, or a business machine.", "title": "" }, { "docid": "4f557240199e1847747bb13745fc9717", "text": "BACKGROUND\nFew studies compare instructor-modeled learning with modified debriefing to self-directed learning with facilitated debriefing during team-simulated clinical scenarios.\n\n\nOBJECTIVE\n: To determine whether self-directed learning with facilitated debriefing during team-simulated clinical scenarios (group A) has better outcomes compared with instructor-modeled learning with modified debriefing (group B).\n\n\nMETHODS\nThis study used a convenience sample of students. The four tools used assessed pre/post knowledge, satisfaction, technical, and team behaviors. Thirteen interdisciplinary student teams participated: seven in group A and six in group B. 
Student teams consisted of one nurse practitioner student, one registered nurse student, one social work student, and one respiratory therapy student. The Knowledge Assessment Tool was analyzed by student profession.\n\n\nRESULTS\nThere were no statistically significant differences within each student profession group on the Knowledge Assessment Tool. Group B was significantly more satisfied than group A (P = 0.01). Group B registered nurses and social worker students were significantly more satisfied than group A (30.0 +/- 0.50 vs. 26.2 +/- 3.0, P = 0.03 and 28.0 +/- 2.0 vs. 24.0 +/- 3.3, P = 0.04, respectively). Group B had significantly better scores than group A on 8 of the 11 components of the Technical Evaluation Tool; group B intervened more quickly. Group B had significantly higher scores on 8 of 10 components of the Behavioral Assessment Tool and overall team scores.\n\n\nCONCLUSION\nThe data suggest that instructor-modeling learning with modified debriefing is more effective than self-directed learning with facilitated debriefing during team-simulated clinical scenarios.", "title": "" }, { "docid": "e2d6dbce669a6d177b68a5660e4821b5", "text": "In this letter, a novel slot-coupling feeding technique is used to realize a dual-polarized 2 × 1 microstrip stacked patch array for mobile wireless communication systems. The array is intended as a basic module for base-station linear arrays, whose final size depending on beamwidth and gain requirements. Each array element is fed through two microstrip lines arranged on the basis of a sequential rotation technique. Each stacked square patch is excited through a square ring slot realized in the feeding network ground plane. Design procedure, simulation results and measurement data are presented for a 2 × 1 array working in the GSM 1800-1900 band (1710-1910 MHz), UMTS band (1920-2170 MHz), ISM band (2400-2484 MHz), and UMTS 3G expansion band (2500-2690 MHz) or, alternatively, WiMAX band (2300-2700 MHz), with a resulting 45% percentage bandwidth (reflection coefficient <; -10 dB). Due to both the symmetry properties of the novel slot-coupling feeding configuration and the implementation of a sequential rotation technique, good results have been obtained in terms of port isolation and cross-polar radiation patterns.", "title": "" }, { "docid": "e14d4405a6da0cd4f1ee1beaeeed0fba", "text": "Source code search plays an important role in software maintenance. The effectiveness of source code search not only relies on the search technique, but also on the quality of the query. In practice, software systems are large, thus it is difficult for a developer to format an accurate query to express what really in her/his mind, especially when the maintainer and the original developer are not the same person. When a query performs poorly, it has to be reformulated. But the words used in a query may be different from those that have similar semantics in the source code, i.e., the synonyms, which will affect the accuracy of code search results. To address this issue, we propose an approach that extends a query with synonyms generated from WordNet. Our approach extracts natural language phrases from source code identifiers, matches expanded queries with these phrases, and sorts the search results. It allows developers to explore word usage in a piece of software, helps them quickly identify relevant program elements for investigation or quickly recognize alternative words for query reformulation. 
Our initial empirical study on search tasks performed on the JavaScript/ECMAScript interpreter and compiler, Rhino, shows that the synonyms used to expand the queries help recommend good alternative queries. Our approach also improves the precision and recall of Conquer, a state-of-the-art query expansion/reformulation technique, by 5% and 8% respectively.", "title": "" }, { "docid": "1231b1e1e0ace856815e32dbdc38a113", "text": "Availability of cloud systems is one of the main concerns of cloud computing. The term, availability of clouds, is mainly evaluated by ubiquity of information comparing with resource scaling. In clouds, load balancing, as a method, is applied across different data centers to ensure the network availability by minimizing use of computer hardware, software failures and mitigating recourse limitations. This work discusses the load balancing in cloud computing and then demonstrates a case study of system availability based on a typical Hospital Database Management solution.", "title": "" }, { "docid": "817f9509afcdbafc60ecac2d0b8ef02d", "text": "Abstract—In most regards, the twenty-first century may not bring revolutionary changes in electronic messaging technology in terms of applications or protocols. Security issues that have long been a concern in messaging application are finally being solved using a variety of products. Web-based messaging systems are rapidly evolving the text-based conversation. The users have the right to protect their privacy from the eavesdropper, or other parties which interferes the privacy of the users for such purpose. The chatters most probably use the instant messages to chat with others for personal issue; in which no one has the right eavesdrop the conversation channel and interfere this privacy. This is considered as a non-ethical manner and the privacy of the users should be protected. The author seeks to identify the security features for most public instant messaging services used over the internet and suggest some solutions in order to encrypt the instant messaging over the conversation channel. The aim of this research is to investigate through forensics and sniffing techniques, the possibilities of hiding communication using encryption to protect the integrity of messages exchanged. Authors used different tools and methods to run the investigations. Such tools include Wireshark packet sniffer, Forensics Tool Kit (FTK) and viaForensic mobile forensic toolkit. Finally, authors will report their findings on the level of security that encryption could provide to instant messaging services.", "title": "" }, { "docid": "85d8b05b8292bedb0e22feb1b26a31b5", "text": "We present an automatic approach for the task of reconstructing a 2-D floor plan from unstructured point clouds of building interiors. Our approach emphasizes accurate and robust detection of building structural elements and, unlike previous approaches, does not require prior knowledge of scanning device poses. The reconstruction task is formulated as a multiclass labeling problem that we approach using energy minimization. We use intuitive priors to define the costs for the energy minimization problem and rely on accurate wall and opening detection algorithms to ensure robustness. 
We provide detailed experimental evaluation results, both qualitative and quantitative, against state-of-the-art methods and labeled ground-truth data.", "title": "" }, { "docid": "1f1fd7217ed5bae04f9ac6f8ccc8c23f", "text": "Relating the brain's structural connectivity (SC) to its functional connectivity (FC) is a fundamental goal in neuroscience because it is capable of aiding our understanding of how the relatively fixed SC architecture underlies human cognition and diverse behaviors. With the aid of current noninvasive imaging technologies (e.g., structural MRI, diffusion MRI, and functional MRI) and graph theory methods, researchers have modeled the human brain as a complex network of interacting neuronal elements and characterized the underlying structural and functional connectivity patterns that support diverse cognitive functions. Specifically, research has demonstrated a tight SC-FC coupling, not only in interregional connectivity strength but also in network topologic organizations, such as community, rich-club, and motifs. Moreover, this SC-FC coupling exhibits significant changes in normal development and neuropsychiatric disorders, such as schizophrenia and epilepsy. This review summarizes recent progress regarding the SC-FC relationship of the human brain and emphasizes the important role of large-scale brain networks in the understanding of structural-functional associations. Future research directions related to this topic are also proposed.", "title": "" }, { "docid": "16eb96556543a9d168693e4ce07ff1d2", "text": "This paper addresses the problem of unsupervised video summarization, formulated as selecting a sparse subset of video frames that optimally represent the input video. Our key idea is to learn a deep summarizer network to minimize distance between training videos and a distribution of their summarizations, in an unsupervised way. Such a summarizer can then be applied on a new video for estimating its optimal summarization. For learning, we specify a novel generative adversarial framework, consisting of the summarizer and discriminator. The summarizer is the autoencoder long short-term memory network (LSTM) aimed at, first, selecting video frames, and then decoding the obtained summarization for reconstructing the input video. The discriminator is another LSTM aimed at distinguishing between the original video and its reconstruction from the summarizer. The summarizer LSTM is cast as an adversary of the discriminator, i.e., trained so as to maximally confuse the discriminator. This learning is also regularized for sparsity. Evaluation on four benchmark datasets, consisting of videos showing diverse events in first-and third-person views, demonstrates our competitive performance in comparison to fully supervised state-of-the-art approaches.", "title": "" }, { "docid": "cd13c8d9b950c35c73aeaadd2cfa1efb", "text": "The significant worldwide increase in observed river runoff has been tentatively attributed to the stomatal \"antitranspirant\" response of plants to rising atmospheric CO(2) [Gedney N, Cox PM, Betts RA, Boucher O, Huntingford C, Stott PA (2006) Nature 439: 835-838]. However, CO(2) also is a plant fertilizer. When allowing for the increase in foliage area that results from increasing atmospheric CO(2) levels in a global vegetation model, we find a decrease in global runoff from 1901 to 1999. This finding highlights the importance of vegetation structure feedback on the water balance of the land surface. 
Therefore, the elevated atmospheric CO(2) concentration does not explain the estimated increase in global runoff over the last century. In contrast, we find that changes in mean climate, as well as its variability, do contribute to the global runoff increase. Using historic land-use data, we show that land-use change plays an additional important role in controlling regional runoff values, particularly in the tropics. Land-use change has been strongest in tropical regions, and its contribution is substantially larger than that of climate change. On average, land-use change has increased global runoff by 0.08 mm/year(2) and accounts for approximately 50% of the reconstructed global runoff trend over the last century. Therefore, we emphasize the importance of land-cover change in forecasting future freshwater availability and climate.", "title": "" }, { "docid": "e7b5662d3ea320f6d86f0ad8a2755f9a", "text": "Meander-line polarizer for a half-wave vibrator is presented. Researched antenna was simulated and prototype was manufactured. The comparison of the theoretical and experimental data for investigated antenna (axial ratio) was produced. Influence of polarizer on half-wave vibrator matching characteristics was researched.", "title": "" }, { "docid": "fb53b5d48152dd0d71d1816a843628f6", "text": "Online banking and e-commerce have been experiencing rapid growth over the past few years and show tremendous promise of growth even in the future. This has made it easier for fraudsters to indulge in new and abstruse ways of committing credit card fraud over the Internet. This paper focuses on real-time fraud detection and presents a new and innovative approach in understanding spending patterns to decipher potential fraud cases. It makes use of Self Organization Map to decipher, filter and analyze customer behavior for detection of fraud.", "title": "" }, { "docid": "63046d1ca19a158052a62c8719f5f707", "text": "Cloud machine learning (CML) techniques offer contemporary machine learning services, with pre-trained models and a service to generate own personalized models. This paper presents a completely unique emotional modeling methodology for incorporating human feeling into intelligent systems. The projected approach includes a technique to elicit emotion factors from users, a replacement illustration of emotions and a framework for predicting and pursuit user’s emotional mechanical phenomenon over time. The neural network based CML service has better training concert and enlarged exactness compare to other large scale deep learning systems. Opinions are important to almost all human activities and cloud based sentiment analysis is concerned with the automatic extraction of sentiment related information from text. With the rising popularity and availability of opinion rich resources such as personal blogs and online appraisal sites, new opportunities and issues arise as people now, actively use information technologies to explore and capture others opinions. In the existing system, a segmentation ranking model is designed to score the usefulness of a segmentation candidate for sentiment classification. A classification model is used for predicting the sentiment polarity of segmentation. The joint framework is trained directly using the sentences annotated with only sentiment polarity, without the use of any syntactic or sentiment annotations in segmentation level. However the existing system still has issue with classification accuracy results. 
To improve the classification performance, in the proposed system, cloud integrate the support vector machine, naive bayes and neural network algorithms along with joint segmentation approaches has been proposed to classify the very positive, positive, neutral, negative and very negative features more effectively using important feature selection. Also to handle the outliers we apply modified k-means clustering method on the given dataset. It is used to cloud cluster the outliers and hence the label as well as unlabeled features is handled efficiently. From the experimental result, we conclude that the proposed system yields better performance than the existing system.", "title": "" }, { "docid": "dba434bb452d16be5453053ae0a7915d", "text": "QR code is a popular form of barcode pattern that is ubiquitously used to tag information to products or for linking advertisements. While, on one hand, it is essential to keep the patterns machine-readable; on the other hand, even small changes to the patterns can easily render them unreadable. Hence, in absence of any computational support, such QR codes appear as random collections of black/white modules, and are often visually unpleasant. We propose an approach to produce high quality visual QR codes, which we call halftone QR codes, that are still machine-readable. First, we build a pattern readability function wherein we learn a probability distribution of what modules can be replaced by which other modules. Then, given a text tag, we express the input image in terms of the learned dictionary to encode the source text. We demonstrate that our approach produces high quality results on a range of inputs and under different distortion effects.", "title": "" }, { "docid": "2119a6fcc721124690d6cc2fe6552724", "text": "A development of humanoid robot HRP-2 is presented in this paper. HRP-2 is a humanoid robotics platform, which we developed in phase two of HRP. HRP was a humanoid robotics project, which had run by the Ministry of Economy, Trade and Industry (METI) of Japan from 1998FY to 2002FY for five years. The ability of the biped locomotion of HRP-2 is improved so that HRP-2 can cope with uneven surface, can walk at two third level of human speed, and can walk on a narrow path. The ability of whole body motion of HRP-2 is also improved so that HRP-2 can get up by a humanoid robot's own self if HRP-2 tips over safely. In this paper, the appearance design, the mechanisms, the electrical systems, specifications, and features upgraded from its prototype are also introduced.", "title": "" }, { "docid": "ce13a3e19ab19e34e3839b0d882379ae", "text": "We argue that extrapolation to examples outside the training space will often be easier for models that capture global structures, rather than just maximise their local fit to the training data. We show that this is true for two popular models: the Decomposable Attention Model and word2vec.", "title": "" }, { "docid": "42ab434d5628a3bfc01ca866c85b2545", "text": "This work discusses the design of a GaN power amplifier demonstrating high efficiency over more than a decade bandwidth using coaxial baluns and transformer matching networks to achieve over a 50MHz-500 MHz bandwidth. The power amplifier demonstrates a power added efficiency of 83%-64% over full bandwidth with 15 dB compressed gain at peak PAE.", "title": "" }, { "docid": "ebb43198da619d656c068f2ab1bfe47f", "text": "Remote data integrity checking (RDIC) enables a server to prove to an auditor the integrity of a stored file. 
It is a useful technology for remote storage such as cloud storage. The auditor could be a party other than the data owner; hence, an RDIC proof is based usually on publicly available information. To capture the need of data privacy against an untrusted auditor, Hao et al. formally defined “privacy against third party verifiers” as one of the security requirements and proposed a protocol satisfying this definition. However, we observe that all existing protocols with public verifiability supporting data update, including Hao et al.’s proposal, require the data owner to publish some meta-data related to the stored data. We show that the auditor can tell whether or not a client has stored a specific file and link various parts of those files based solely on the published meta-data in Hao et al.’s protocol. In other words, the notion “privacy against third party verifiers” is not sufficient in protecting data privacy, and hence, we introduce “zero-knowledge privacy” to ensure the third party verifier learns nothing about the client’s data from all available information. We enhance the privacy of Hao et al.’s protocol, develop a prototype to evaluate the performance and perform experiment to demonstrate the practicality of our proposal.", "title": "" }, { "docid": "b876e62db8a45ab17d3a9d217e223eb7", "text": "A study was conducted to evaluate user performance and satisfaction in completion of a set of text creation tasks using three commercially available continuous speech recognition systems. The study also compared user performance on similar tasks using keyboard input. One part of the study (Initial Use) involved 24 users who enrolled, received training and carried out practice tasks, and then completed a set of transcription and composition tasks in a single session. In a parallel effort (Extended Use), four researchers used speech recognition to carry out real work tasks over 10 sessions with each of the three speech recognition software products. This paper presents results from the Initial Use phase of the study along with some preliminary results from the Extended Use phase. We present details of the kinds of usability and system design problems likely in current systems and several common patterns of error correction that we found.", "title": "" } ]
scidocsrr
caef54de34dfb859095d871f99ebc024
Object Detectors Emerge in Deep Scene CNNs
[ { "docid": "509c4b0d3cfd457b1ef22ee5de1830b8", "text": "Convolutional neural nets (convnets) trained from massive labeled datasets [1] have substantially improved the state-of-the-art in image classification [2] and object detection [3]. However, visual understanding requires establishing correspondence on a finer level than object category. Given their large pooling regions and training from whole-image labels, it is not clear that convnets derive their success from an accurate correspondence model which could be used for precise localization. In this paper, we study the effectiveness of convnet activation features for tasks requiring correspondence. We present evidence that convnet features localize at a much finer scale than their receptive field sizes, that they can be used to perform intraclass aligment as well as conventional hand-engineered features, and that they outperform conventional features in keypoint prediction on objects from PASCAL VOC 2011 [4].", "title": "" } ]
[ { "docid": "4e285c4525b938b89759675823c244ff", "text": "We investigated preschoolers’ selective learning from models that had previously appeared to be reliable or unreliable. Replicating previous research, children from 4 years selectively learned novel words from reliable over unreliable speakers. Extending previous research, children also selectively learned other kinds of acts – novel games – from reliable actors. More important, – and novel to this study, this selective learning was not just based on a preference for one model or one kind of act, but had a normative dimension to it. Children understood the way a reliable actor demonstrated an act not only as the better one, but as the normatively appropriate or correct one, as indicated in both their explicit verbal comments and their spontaneous normative interventions (e.g., protest, critique) in response to third-party acts deviating from the one demonstrated. These findings are discussed in the broader context of the development of children’s social cognition and cultural learning. © 2008 Elsevier Inc. All rights reserved. Much of what we know and do we have learned from others. This process of cultural learning has its roots in earliest infancy, when imitation begins. From the second year, infants begin to imitatively learn instrumental, playful, symbolic and other kinds of acts from adults (Carpenter, Nagell, & Tomasello, 1998; Casler & Kelemen, 2005; Gergely, Bekkering, & Király, 2002; Meltzoff, 1995). When imitating others, even young children seem not to be confined to re-enact merely idiosyncratic intentional acts of an individual. Rather, they learn something about general forms of actions, with such forms being structured by normative dimensions of appropriate and inappropriate performance. An indirect indicator of such an understanding can be seen, for example, in the phenomenon of functional fixedness: ∗ Corresponding author at: Max Planck Institute for Evolutionary Anthropology, Department of Developmental and Comparative Psychology, Deutscher Platz 6, D-04103 Leipzig, Germany. Tel.: +49 341 3550 449; fax: +49 341 3550 444. E-mail address: rakoczy@eva.mpg.de (H. Rakoczy). 0885-2014/$ – see front matter © 2008 Elsevier Inc. All rights reserved. doi:10.1016/j.cogdev.2008.07.004 62 H. Rakoczy et al. / Cognitive Development 24 (2009) 61–69 When children from 2 years see someone use a novel object systematically in an instrumental way, they not only use the object in similar ways themselves later on, but only use it for this purpose and assume other people will do so as well (Casler & Kelemen, 2005). On a rich interpretation, this could be taken to show that children not only understand what the other person was up to, but also understand how one appropriately acts with the tool because that is what it is for. While such a rich reading of the functional fixedness data is not necessarily warranted (children could fixate on a way of treating the object they merely see as usual, but not necessarily as normatively licensed), recent work has documented children’s learning of novel acts with normative structure in more direct ways. In a set of studies (Rakoczy, 2008; Rakoczy, Warneken, & Tomasello, 2008), young children (age 2 and 3) first saw an experimenter demonstrate a novel simple rule game (called, e.g., “daxing”). In the course of this demonstration, the experimenter performed two kinds of acts, one of which was marked as the proper game (“This is daxing”), while the other one was marked as an accident (“Whoops!”). 
Subsequently, children not only learned to play the game imitatively themselves; they also indicated that they understood the demonstrated way to play the game as the normatively correct one by criticizing third parties that announced their participation in “daxing” and then performed inappropriate acts. This normative understanding, furthermore, involves some basic sensitivity to the context of the actions: In one control condition, when the model performed the same kinds of behaviours but these were all neutrally marked (as unspecific acts), children did not jump to any normative conclusions and did not criticize third parties. In another control condition, the demonstration and the act of the third party were exactly alike, but the announcement of the third party was different: She announced that she did not want to participate in the game (and thus her subsequent act did not constitute a mistake). Obviously taking this announcement into account, children now did not criticize her. Young preschoolers thus are not only social learners; they are also normative learners in rudimentary form. But how sophisticated and specific are young children’s abilities to engage in cultural normative learning? In particular, apart from some rudimentary context-specificity (mentioned above), how systematic and selective is young children’s learning of normatively structured activities from others? Selectivity in learning from different kinds of models has been the focus of much recent research in social cognitive development (for overviews, see Koenig & Harris, 2005a). Numerous studies have revealed that children from around 3 to 4 take into account different properties of models when having to select between two models in novel word learning situations. First, children are sensitive to expressions of knowledge versus ignorance, preferring knowledgeable models over ignorant ones (Koenig & Harris, 2005b; Studies 2 and 3; Sabbagh & Baldwin, 2001, Study 1). Second, children take into account expressed (un-) certainty and confidence, selectively trusting confident and certain models (Birch, Frampton, & Akmal, 2006; Matsui, Yamamoto, & McCagg, 2006; Moore, Bryant, & Furrow, 1989; Sabbagh & Baldwin, 2001; Study 2). Third, children prefer adult over peer models when learning novel words (Jaswal & Neely, 2006). Fourth, children have been found to differentiate between models of varying degrees of familiarity, preferring more familiar ones (e.g., caregivers at their own day-care center) over less familiar ones (caregivers from other day-care centers; Corriveau, Pasquini, & Harris, 2006). Finally, the best-documented achievement of preschoolers is their ability to track and take into account the varying reliability of different agents. When children first witness two agents one of whom proves reliable in naming familiar objects while the other proves unreliable, and then can choose between the two agents in learning novel words for novel objects, 4-year-olds (and sometimes 3-year-olds) prefer the previously reliable agent (Clément, Koenig, & Harris, 2004; Jaswal & Neely, 2006; Koenig & Harris, 2005b; Koenig, Clément, & Harris, 2004; Pasquini, Corriveau, Koenig, & Harris, 2007). What becomes clear from this line of research is that young preschoolers differentiate between models and tend to prefer reliable, adult, confident and knowledgeable models over unreliable, peer, unconfident and ignorant models when learning novel words. 
But it is not totally clear what this preference indicates: Do children think that one model is more competent and knows the correct answer to culturally relevant questions? Or is their preference – though prompted by the models’ indications of competence and knowledge – simpler, such that they merely like one model more and thus prefer to follow her? In other words: Do the indications of competence make the model simply more attractive to the child? While the latter possibility does not seem highly plausible on the face of it, arguably it cannot be ruled out, in particular in light of the findings with regard to model familiarity: Given that children show a similar pattern of preference for familiar over unfamiliar models, this might put into question the claim that the preference for the reliable over unreliable models is in fact based on estimations of competence (versus differential sympathy). In sum, then, preschool children have been shown to be cultural normative learners: Not only do they learn through imitation, but they learn from adult models normatively structured forms of action – how one performs them correctly. (This becomes clearest in children’s protest against third party mistakes.) But we do not know how systematic and specific such normative learning is. Yet preschoolers have been shown to be systematic and selective in their learning of words from others, preferring for example reliable over unreliable models. Several questions, however, remain unanswered. First, we do not yet know exactly what this selectivity is based on. Second, we do not know how general this selectivity is – virtually all existing studies so far have looked at linguistic learning only (of sortals or labels of object function; the sole exceptions are studies by Koenig and Harris (2005b; Study 3) and by Birch, Vauthier, and Bloom (in press) that looked at object functions and allowed children to answer verbally or by re-enacting a demonstrated function). Third, and in particular, we do not know yet whether children view the way of doing something they selectively imitate from reliable models in normative terms – as the appropriate way to do it. The present work, therefore, aims at addressing these questions by bringing together the two lines of inquiry on children’s selective learning and on their normative learning. Toward this aim, children’s selective acquisition of normatively structured activities (beyond only linguistic learning) from differentially reliable models was studied. Pilot work suggested that selective learning extends beyond the domain of word learning and could be found on a comparable scale in the domain of playing games. When confronted with two characters (one of them previously reliable, the other previously unreliable) who played a game in different ways, children at 4 years of age selectively played the game in the way the previously reliable model did. In this pilot work, however, in a second phase, when a third p", "title": "" }, { "docid": "096b061bc841c963b9f484c33c124ecd", "text": "Random walks are a fundamental model in applied mathematics and are a common example of a Markov chain. The limiting stationary distribution of the Markov chain represents the fraction of the time spent in each state during the stochastic process. A standard way to compute this distribution for a random walk on a finite set of states is to compute the Perron vector of the associated transition matrix.
There are algebraic analogues of this Perron vector in terms of transition probability tensors of higher-order Markov chains. These vectors are nonnegative, have dimension equal to the dimension of the state space, and sum to one and are derived by making an algebraic substitution in the equation for the joint-stationary distribution of a higher-order Markov chains. Here, we present the spacey random walk, a non-Markovian stochastic process whose stationary distribution is given by the tensor eigenvector. The process itself is a vertex-reinforced random walk, and its discrete dynamics are related to a continuous dynamical system. We analyze the convergence properties of these dynamics and discuss numerical methods for computing the stationary distribution. Finally, we provide several applications of the spacey random walk model in population genetics, ranking, and clustering data, and we use the process to analyze taxi trajectory data in New York. This example shows definite non-Markovian structure. 1. Random walks, higher-order Markov chains, and stationary distributions. Random walks and Markov chains are one of the most well-known and studied stochastic processes as well as a common tool in applied mathematics. A random walk on a finite set of states is a process that moves from state to state in a manner that depends only on the last state and an associated set of transition probabilities from that state to the other states. Here is one such example, with a few sample trajectories of transitions:", "title": "" }, { "docid": "be009b972c794d01061c4ebdb38cc720", "text": "The existing efforts in computer assisted semen analysis have been focused on high speed imaging and automated image analysis of sperm motility. This results in a large amount of data, and it is extremely challenging for both clinical scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at a regular interval and to summarize spatio-temporal motion characteristics using static visual representations. The design of the glyphs addresses the needs for (a) encoding some 20 variables using separable visual channels, (b) supporting scientific observation of the interrelationships between different measurements and comparison between different sperm cells and their flagella, and (c) facilitating the learning of the encoding scheme by making use of appropriate visual abstractions and metaphors. As a case study, we focus this work on video visualization for computer-aided semen analysis, which has a broad impact on both biological sciences and medical healthcare. We demonstrate that glyph-based visualization can serve as a means of external memorization of video data as well as an overview of a large set of spatiotemporal measurements. It enables domain scientists to make scientific observation in a cost-effective manner by reducing the burden of viewing videos repeatedly, while providing them with a new visual representation for conveying semen statistics.", "title": "" }, { "docid": "e2a6b7730198cbea992947a8d2814ba8", "text": "Some individuals have a greater capacity than others to carry out sophisticated information processing about emotions and emotion-relevant stimuli and to use this information as a guide to thinking and behavior. The authors have termed this set of abilities emotional intelligence (EI). 
Since the introduction of the concept, however, a schism has developed in which some researchers focus on EI as a distinct group of mental abilities, and other researchers instead study an eclectic mix of positive traits such as happiness, self-esteem, and optimism. Clarifying what EI is and is not can help the field by better distinguishing research that is truly pertinent to EI from research that is not. EI--conceptualized as an ability--is an important variable both conceptually and empirically, and it shows incremental validity for predicting socially relevant outcomes.", "title": "" }, { "docid": "287d1e603f7d677cff93aa0601a9bfef", "text": "Frameworks are an object-oriented reuse technique that are widely used in industry but not discussed much by the software engineering research community. They are a way of reusing design that is part of the reason that some object-oriented developers are so productive. This paper compares and contrasts frameworks with other reuse techniques, and describes how to use them, how to evaluate them, and how to develop them. It describe the tradeo s involved in using frameworks, including the costs and pitfalls, and when frameworks are appropriate.", "title": "" }, { "docid": "b15f185258caa9d355fae140a41ae03c", "text": "The current approaches in terms of information security awareness and education are descriptive (i.e. they are not accomplishment-oriented nor do they recognize the factual/normative dualism); and current research has not explored the possibilities offered by motivation/behavioural theories. The first situation, level of descriptiveness, is deemed to be questionable because it may prove eventually that end-users fail to internalize target goals and do not follow security guidelines, for example ± which is inadequate. Moreover, the role of motivation in the area of information security is not considered seriously enough, even though its role has been widely recognised. To tackle such weaknesses, this paper constructs a conceptual foundation for information systems/organizational security awareness. The normative and prescriptive nature of end-user guidelines will be considered. In order to understand human behaviour, the behavioural science framework, consisting in intrinsic motivation, a theory of planned behaviour and a technology acceptance model, will be depicted and applied. Current approaches (such as the campaign) in the area of information security awareness and education will be analysed from the viewpoint of the theoretical framework, resulting in information on their strengths and weaknesses. Finally, a novel persuasion strategy aimed at increasing users' commitment to security guidelines is presented. spite of its significant role, seems to lack adequate foundations. To begin with, current approaches (e.g. McLean, 1992; NIST, 1995, 1998; Perry, 1985; Morwood, 1998), are descriptive in nature. Their inadequacy with respect to point of departure is partly recognized by McLean (1992), who points out that the approaches presented hitherto do not ensure learning. Learning can also be descriptive, however, which makes it an improper objective for security awareness. Learning and other concepts or approaches are not irrelevant in the case of security awareness, education or training, but these and other approaches need a reasoned contextual foundation as a point of departure in order to be relevant. 
For instance, if learning does not reflect the idea of prescriptiveness, the objective of the learning approach includes the fact that users may learn guidelines, but nevertheless fail to comply with them in the end. This state of affairs (level of descriptiveness[6]) is an inadequate objective for a security activity (the idea of prescriptiveness will be thoroughly considered in section 3). Also with regard to the content facet, the important role of motivation (and behavioural theories) with respect to the uses of security systems has been recognised (e.g. by NIST, 1998; Parker, 1998; Baskerville, 1989; Spruit, 1998; SSE-CMM, 1998a; 1998b; Straub, 1990; Straub et al., 1992; Thomson and von Solms, 1998; Warman, 1992) – but only on an abstract level (as seen in Table I, the issue is not considered from the viewpoint of any particular behavioural theory as yet). Motivation, however, is an issue where a deeper understanding may be of crucial relevance with respect to the effectiveness of approaches based on it. The role, possibilities and constraints of motivation and attitude in the effort to achieve positive results with respect to information security activities will be addressed at a conceptual level from the viewpoints of different theories. The scope of this paper is limited to the content aspects of awareness (Table I) and further end-users, thus resulting in a research contribution that is: a conceptual foundation and a framework for IS security awareness. This is achieved by addressing the following research questions: . What are the premises, nature and point of departure of awareness? . What is the role of attitude, and particularly motivation: the possibilities and requirements for achieving motivation/user acceptance and commitment with respect to information security tasks? . What approaches can be used as a framework to reach the stage of internalization and end-user", "title": "" }, { "docid": "31f4f0348a4c210ed4d67ede9e1d7b53", "text": "To minimize the amount of data-shuffling I/O that occurs between the pipeline stages of a distributed data-parallel program, its procedural code must be optimized with full awareness of the pipeline that it executes in. Unfortunately, neither pipeline optimizers nor traditional compilers examine both the pipeline and procedural code of a data-parallel program so programmers must either hand-optimize their program across pipeline stages or live with poor performance. To resolve this tension between performance and programmability, this paper describes PeriSCOPE, which automatically optimizes a data-parallel program's procedural code in the context of data flow that is reconstructed from the program's pipeline topology. Such optimizations eliminate unnecessary code and data, perform early data filtering, and calculate small derived values (e.g., predicates) earlier in the pipeline, so that less data - sometimes much less data - is transferred between pipeline stages. PeriSCOPE further leverages symbolic execution to enlarge the scope of such optimizations by eliminating dead code. We describe how PeriSCOPE is implemented and evaluate its effectiveness on real production jobs.", "title": "" }, { "docid": "d8b0ef94385d1379baeb499622253a02", "text": "Mining association rules associates events that took place together. In market basket analysis, these discovered rules associate items purchased together. Items that are not part of a transaction are not considered.
In other words, typical association rules do not take into account items that are part of the domain but that are not together part of a transaction. Association rules are based on frequencies and count the transactions where items occur together. However, counting absences of items is prohibitive if the number of possible items is very large, which is typically the case. Nonetheless, knowing the relationship between the absence of an item and the presence of another can be very important in some applications. These rules are called negative association rules. We review current approaches for mining negative association rules and we discuss limitations and future research directions.", "title": "" }, { "docid": "2409f9a37398dbff4306930280c76e81", "text": "OBJECTIVES\nThe dose-response relationship for hand-transmitted vibration has been investigated extensively in temperate environments. Since the clinical features of hand-arm vibration syndrome (HAVS) differ between the temperate and tropical environment, we conducted this study to investigate the dose-response relationship of HAVS in a tropical environment.\n\n\nMETHODS\nA total of 173 male construction, forestry and automobile manufacturing plant workers in Malaysia were recruited into this study between August 2011 and 2012. The participants were interviewed for history of vibration exposure and HAVS symptoms, followed by hand functions evaluation and vibration measurement. Three types of vibration doses-lifetime vibration dose (LVD), total operating time (TOT) and cumulative exposure index (CEI)-were calculated and its log values were regressed against the symptoms of HAVS. The correlation between each vibration exposure dose and the hand function evaluation results was obtained.\n\n\nRESULTS\nThe adjusted prevalence ratio for finger tingling and numbness was 3.34 (95% CI 1.27 to 8.98) for subjects with lnLVD≥20 ln m(2) s(-4) against those <16 ln m(2) s(-4). Similar dose-response pattern was found for CEI but not for TOT. No subject reported white finger. The prevalence of finger coldness did not increase with any of the vibration doses. Vibrotactile perception thresholds correlated moderately with lnLVD and lnCEI.\n\n\nCONCLUSIONS\nThe dose-response relationship of HAVS in a tropical environment is valid for finger tingling and numbness. The LVD and CEI are more useful than TOT when evaluating the dose-response pattern of a heterogeneous group of vibratory tools workers.", "title": "" }, { "docid": "35c8c5f950123154f4445b6c6b2399c2", "text": "Online social media have democratized the broadcasting of information, encouraging users to view the world through the lens of social networks. The exploitation of this lens, termed social sensing, presents challenges for researchers at the intersection of computer science and the social sciences.", "title": "" }, { "docid": "33c113db245fb36c3ce8304be9909be6", "text": "Bring Your Own Device (BYOD) is growing in popularity. In fact, this inevitable and unstoppable trend poses new security risks and challenges to control and manage corporate networks and data. BYOD may be infected by viruses, spyware or malware that gain access to sensitive data. This unwanted access led to the disclosure of information, modify access policy, disruption of service, loss of productivity, financial issues, and legal implications. This paper provides a review of existing literature concerning the access control and management issues, with a focus on recent trends in the use of BYOD. 
This article provides an overview of existing research articles which involve access control and management issues, which constitute of the recent rise of usage of BYOD devices. This review explores a broad area concerning information security research, ranging from management to technical solution of access control in BYOD. The main aim for this is to investigate the most recent trends touching on the access control issues in BYOD concerning information security and also to analyze the essential and comprehensive requirements needed to develop an access control framework in the future. Keywords— Bring Your Own Device, BYOD, access control, policy, security.", "title": "" }, { "docid": "aa341b32d0b1e2d2e50770d1c2653ae4", "text": "Derived from two theoretical concepts—situation strength and trait activation—we develop and test an interactionist model governing the degree to which five-factor model personality traits are related to job performance. One concept—situation strength—was hypothesized to predict the validities of all of the “Big Five” traits, while the effects of the other—trait activation—were hypothesized to be specific to each trait. Based on this interactionist model, personality–performance correlations were located in the literature, and occupationally homogeneous jobs were coded according to their theoretically relevant contextual properties. Results revealed that all five traits were more predictive of performance for jobs in which the process by which the work was done represented weak situations (e.g., work was unstructured, employee had discretion to make decisions). Many of the traits also predicted performance in job contexts that activated specific traits (e.g., extraversion better predicted performance in jobs requiring social skills, agreeableness was less positively related to performance in competitive contexts, openness was more strongly related to performance in jobs with strong innovation/ creativity requirements). Overall, the study’s findings supported our interactionist model in which the situation exerts both general and specific effects on the degree to which personality predicts job performance.", "title": "" }, { "docid": "0f7c98d1071d95ef537d5534f994f435", "text": "Zhaohui Xue 1,*, Peijun Du 2,3,4, Hongjun Su 1 and Shaoguang Zhou 1 1 School of Earth Sciences and Engineering, Hohai University, Nanjing 211100, China; hjsu1@163.com (H.S.); zhousg1966@126.com (S.Z.) 2 Key Laboratory for Satellite Mapping Technology and Applications of National Administration of Surveying, Mapping and Geoinformation of China, Nanjing University, Nanjing 210023, China; dupjrs@gmail.com 3 Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210023, China 4 Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing University, Nanjing 210023, China * Correspondence: zhaohui.xue@hhu.edu.cn", "title": "" }, { "docid": "97b9d8dd21dfbb68cf72ad2f03b1a98a", "text": "The explosive increase and ubiquitous accessibility of visual data on the Web have led to the prosperity of research activity in image search or retrieval. With the ignorance of visual content as a ranking clue, methods with text search techniques for visual retrieval may suffer inconsistency between the text words and visual content. 
Content-based image retrieval (CBIR), which makes use of the representation of visual content to identify relevant images, has attracted sustained attention in recent two decades. Such a problem is challenging due to the intention gap and the semantic gap problems. Numerous techniques have been developed for content-based image retrieval in the last decade. The purpose of this paper is to categorize and evaluate those algorithms proposed during the period of 2003 to 2016. We conclude with several promising directions for future research.", "title": "" }, { "docid": "cd5bba994df6d2d0b30d7249be81dc24", "text": "In many classifier systems, the classifier strength parameter serves as a predictor of future payoff and as the classifier's fitness for the genetic algorithm. We investigate a classifier system, XCS, in which each classifier maintains a prediction of expected payoff, but the classifier's fitness is given by a measure of the prediction's accuracy. The system executes the genetic algorithm in niches defined by the match sets, instead of panmictically. These aspects of XCS result in its population tending to form a complete and accurate mapping X × A ⇒ P from inputs and actions to payoff predictions. Further, XCS tends to evolve classifiers that are maximally general, subject to an accuracy criterion. Besides introducing a new direction for classifier system research, these properties of XCS make it suitable for a wide range of reinforcement learning situations where generalization over states is desirable.", "title": "" }, { "docid": "df99d221aa2f31f03a059106991a1728", "text": "With the advancement of mobile computing technology and cloud-based streaming music service, user-centered music retrieval has become increasingly important. User-specific information has a fundamental impact on personal music preferences and interests. However, existing research pays little attention to the modeling and integration of user-specific information in music retrieval algorithms/models to facilitate music search. In this paper, we propose a novel model, named User-Information-Aware Music Interest Topic (UIA-MIT) model. The model is able to effectively capture the influence of user-specific information on music preferences, and further associate users' music preferences and search terms under the same latent space. Based on this model, a user information aware retrieval system is developed, which can search and re-rank the results based on age- and/or gender-specific music preferences. A comprehensive experimental study demonstrates that our methods can significantly improve the search accuracy over existing text-based music retrieval methods.", "title": "" }, { "docid": "c1698ed38c532d9e9b530f864d6faeae", "text": "Action languages are formal models of parts of natural language that are designed to describe effects of actions. Many of these languages can be viewed as high level notations of answer set programs structured to represent transition systems. However, the form of answer set programs considered in the earlier work is quite limited in comparison with the modern Answer Set Programming (ASP) language, which allows several useful constructs for knowledge representation, such as choice rules, aggregates, and abstract constraint atoms. We propose a new action language called BC+, which closes the gap between action languages and the modern ASP language.
The main idea is to define the semantics of BC+ in terms of general stable model semantics for propositional formulas, under which many modern ASP language constructs can be identified with shorthands for propositional formulas. Language BC+ turns out to be sufficiently expressive to encompass the best features of other action languages, such as languages B, C, C+, and BC. Computational methods available in ASP solvers are readily applicable to compute BC+, which led to an implementation of the language by extending system CPLUS2ASP.", "title": "" }, { "docid": "cab97e23b7aa291709ecf18e29f580cf", "text": "Recent findings show that coding genes are not the only targets that miRNAs interact with. In fact, there is a pool of different RNAs competing with each other to attract miRNAs for interactions, thus acting as competing endogenous RNAs (ceRNAs). The ceRNAs indirectly regulate each other via the titration mechanism, i.e. the increasing concentration of a ceRNA will decrease the number of miRNAs that are available for interacting with other targets. The cross-talks between ceRNAs, i.e. their interactions mediated by miRNAs, have been identified as the drivers in many disease conditions, including cancers. In recent years, some computational methods have emerged for identifying ceRNA-ceRNA interactions. However, there remain great challenges and opportunities for developing computational methods to provide new insights into ceRNA regulatory mechanisms.In this paper, we review the publically available databases of ceRNA-ceRNA interactions and the computational methods for identifying ceRNA-ceRNA interactions (also known as miRNA sponge interactions). We also conduct a comparison study of the methods with a breast cancer dataset. Our aim is to provide a current snapshot of the advances of the computational methods in identifying miRNA sponge interactions and to discuss the remaining challenges.", "title": "" }, { "docid": "ea278850f00c703bdd73957c3f7a71ce", "text": "In this paper, we consider the directional multigigabit (DMG) transmission problem in IEEE 802.11ad wireless local area networks (WLANs) and design a random-access-based medium access control (MAC) layer protocol incorporated with a directional antenna and cooperative communication techniques. A directional cooperative MAC protocol, namely, D-CoopMAC, is proposed to coordinate the uplink channel access among DMG stations (STAs) that operate in an IEEE 802.11ad WLAN. Using a 3-D Markov chain model with consideration of the directional hidden terminal problem, we develop a framework to analyze the performance of the D-CoopMAC protocol and derive a closed-form expression of saturated system throughput. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of D-CoopMAC varies with the number of DMG STAs or beam sectors. In addition, the D-CoopMAC protocol can significantly improve system performance, as compared with the traditional IEEE 802.11ad MAC protocol.", "title": "" } ]
scidocsrr
f42988037bcb040a0661526bb355d13e
Generic Advice: On the Combination of AOP with Generative Programming in AspectC++
[ { "docid": "47929b2ff4aa29bf115a6728173feed7", "text": "This paper presents a metaobject protocol (MOP) for C++. This MOP was designed to bring the power of meta-programming to C++ programmers. It avoids penalties on runtime performance by adopting a new meta-architecture in which the metaobjects control the compilation of programs instead of being active during program execution. This allows the MOP to be used to implement libraries of efficient, transparent language extensions.", "title": "" } ]
[ { "docid": "e5b9c4594c374d6bf05594d0bda38309", "text": "An instance I of the Hospitals / Residents problem (HR) [6, 7, 15] involves a set R = {r1, . . . , rn} of residents and a set H = {h1, . . . , hm} of hospitals. Each hospital hj ∈ H has a positive integral capacity, denoted by cj . Also, each resident ri ∈ R has a preference list in which he ranks in strict order a subset of H. A pair (ri, hj) ∈ R ×H is said to be acceptable if hj appears in ri’s preference list; in this case ri is said to find hj acceptable. Similarly each hospital hj ∈ H has a preference list in which it ranks in strict order those residents who find hj acceptable. Given any three agents x, y, z ∈ R ∪ H, x is said to prefer y to z if x finds each of y and z acceptable, and y precedes z on x’s preference list. Let C = ∑ hj∈H cj . Let A denote the set of acceptable pairs in I, and let L = |A|. An assignment M is a subset of A. If (ri, hj) ∈ M , ri is said to be assigned to hj , and hj is assigned ri. For each q ∈ R ∪ H, the set of assignees of q in M is denoted by M(q). If ri ∈ R and M(ri) = ∅, ri is said to be unassigned, otherwise ri is assigned. Similarly, any hospital hj ∈ H is under-subscribed, full or over-subscribed according as |M(hj)| is less than, equal to, or greater than cj , respectively. A matching M is an assignment such that |M(ri)| ≤ 1 for each ri ∈ R and |M(hj)| ≤ cj for each hj ∈ H (i.e., no resident is assigned to an unacceptable hospital, each resident is assigned to at most one hospital, and no hospital is over-subscribed). For notational convenience, given a matching M and a resident ri ∈ R such that M(ri) 6= ∅, where there is no ambiguity the notation M(ri) is also used to refer to the single member of M(ri). A pair (ri, hj) ∈ A\\M blocks a matching M , or is a blocking pair for M , if the following conditions are satisfied relative to M :", "title": "" }, { "docid": "0611a30b71d83e118bdd25d86eb20fee", "text": "RNNs have been shown to be excellent models for sequential data and in particular for data that is generated by users in an session-based manner. The use of RNNs provides impressive performance benefits over classical methods in session-based recommendations. In this work we introduce novel ranking loss functions tailored to RNNs in the recommendation setting. The improved performance of these losses over alternatives, along with further tricks and refinements described in this work, allow for an overall improvement of up to 35% in terms of MRR and Recall@20 over previous session-based RNN solutions and up to 53% over classical collaborative filtering approaches. Unlike data augmentation-based improvements, our method does not increase training times significantly. We further demonstrate the performance gain of the RNN over baselines in an online A/B test.", "title": "" }, { "docid": "b42b17131236abc1ee3066905025aa8c", "text": "The planet Mars, while cold and arid today, once possessed a warm and wet climate, as evidenced by extensive fluvial features observable on its surface. It is believed that the warm climate of the primitive Mars was created by a strong greenhouse effect caused by a thick CO2 atmosphere. Mars lost its warm climate when most of the available volatile CO2 was fixed into the form of carbonate rock due to the action of cycling water. It is believed, however, that sufficient CO2 to form a 300 to 600 mb atmosphere may still exist in volatile form, either adsorbed into the regolith or frozen out at the south pole. 
This CO2 may be released by planetary warming, and as the CO2 atmosphere thickens, positive feedback is produced which can accelerate the warming trend. Thus it is conceivable, that by taking advantage of the positive feedback inherent in Mars' atmosphere/regolith CO2 system, that engineering efforts can produce drastic changes in climate and pressure on a planetary scale. In this paper we propose a mathematical model of the Martian CO2 system, and use it to produce analysis which clarifies the potential of positive feedback to accelerate planetary engineering efforts. It is shown that by taking advantage of the feedback, the requirements for planetary engineering can be reduced by about 2 orders of magnitude relative to previous estimates. We examine the potential of various schemes for producing the initial warming to drive the process, including the stationing of orbiting mirrors, the importation of natural volatiles with high greenhouse capacity from the outer solar system, and the production of artificial halocarbon greenhouse gases on the Martian surface through in-situ industry. If the orbital mirror scheme is adopted, mirrors with dimension on the order or 100 km radius are required to vaporize the CO2 in the south polar cap. If manufactured of solar sail like material, such mirrors would have a mass on the order of 200,000 tonnes. If manufactured in space out of asteroidal or Martian moon material, about 120 MWe-years of energy would be needed to produce the required aluminum. This amount of power can be provided by near-term multimegawatt nuclear power units, such as the 5 MWe modules now under consideration for NEP spacecraft. Orbital transfer of very massive bodies from the outer solar system can be accomplished using nuclear thermal rocket engines using the asteroid's volatile material as propellant. Using major planets for gravity assists, the rocket ∆V required to move an outer solar system asteroid onto a collision trajectory with Mars can be as little as 300 m/s. If the asteroid is made of NH3, specific impulses of about 400 s can be attained, and as little as 10% of the asteroid will be required for propellant. Four 5000 MWt NTR engines would require a 10 year burn time to push a 10 billion tonne asteroid through a ∆V of 300 m/s. About 4 such objects would be sufficient to greenhouse Mars. Greenhousing Mars via the manufacture of halocarbon gases on the planet's surface may well be the most practical option. Total surface power requirements to drive planetary warming using this method are calculated and found to be on the order of 1000 MWe, and the required times scale for climate and atmosphere modification is on the order of 50 years. It is concluded that a drastic modification of Martian conditions can be achieved using 21st century technology. The Mars so produced will closely resemble the conditions existing on the primitive Mars. Humans operating on the surface of such a Mars would require breathing gear, but pressure suits would be unnecessary. With outside atmospheric pressures raised, it will be possible to create large dwelling areas by means of very large inflatable structures. Average temperatures could be above the freezing point of water for significant regions during portions of the year, enabling the growth of plant life in the open. The spread of plants could produce enough oxygen to make Mars habitable for animals in several millennia. More rapid oxygenation would require engineering efforts supported by multi-terrawatt power sources. 
It is speculated that the desire to speed the terraforming of Mars will be a driver for developing such technologies, which in turn will define a leap in human power over nature as dramatic as that which accompanied the creation of post-Renaissance industrial civilization.", "title": "" }, { "docid": "b5d9b92c127cea5c2d27ebb9b83a93f5", "text": "Random walk graph kernel has been used as an important tool for various data mining tasks including classification and similarity computation. Despite its usefulness, however, it suffers from the expensive computational cost which is at least O(n) or O(m) for graphs with n nodes and m edges. In this paper, we propose Ark, a set of fast algorithms for random walk graph kernel computation. Ark is based on the observation that real graphs have much lower intrinsic ranks, compared with the orders of the graphs. Ark exploits the low rank structure to quickly compute random walk graph kernels in O(n) or O(m) time. Experimental results show that our method is up to 97,865× faster than the existing algorithms, while providing more than 91.3% of the accuracies.", "title": "" }, { "docid": "9badb6e864118f1782d86486f6df9ff3", "text": "The genera Opechona Looss and Prodistomum Linton are redefined: the latter is re-established, its diagnostic character being the lack of a uroproct. Pharyngora Lebour and Neopechona Stunkard are considered synonyms of Opechona, and Acanthocolpoides Travassos, Freitas & Bührnheim is considered a synonym of Prodistomum. Opechona bacillaris (Molin) and Prodistomum [originally Distomum] polonii (Molin) n. comb. are described from the NE Atlantic Ocean. Separate revisions with keys to Opechona, Prodistomum and ‘Opechona-like’ species incertae sedis are presented. Opechona is considered to contain: O. bacillaris (type-species), O. alaskensis Ward & Fillingham, O. [originally Neopechona] cablei (Stunkard) n. comb., O. chloroscombri Nahhas & Cable, O. occidentalis Montgomery, O. parvasoma Ching sp. inq., O. pharyngodactyla Manter, O. [originally Distomum] pyriforme (Linton) n. comb. and O. sebastodis (Yamaguti). Prodistomum includes: P. gracile Linton (type-species), P. [originally Opechona] girellae (Yamaguti) n. comb., P. [originally Opechona] hynnodi (Yamaguti) n. comb., P. [originally Opechona] menidiae (Manter) n. comb., P. [originally Pharyngora] orientalis (Layman) n. comb., P. polonii and P. [originally Opechona] waltairensis (Madhavi) n. comb. Some species are considered ‘Opechona-like’ species incertae sedis: O. formiae Oshmarin, O. siddiqii Ahmad, 1986 nec 1984, O. mohsini Ahmad, O. magnatestis Gaevskaya & Kovaleva, O. vinodae Ahmad, O. travassosi Ahmad, ‘Lepidapedon’ nelsoni Gupta & Mehrotra and O. siddiqi Ahmad, 1984 nec 1986. The related genera Cephalolepidapedon Yamaguti and Clavogalea Bray and the synonymies of their constituent species are discussed, and further comments are made on related genera and misplaced species. The new combination Clavogalea [originally Stephanostomum] trachinoti (Fischthal & Thomas) is made. The taxonomy, life-history, host-specificity and zoogeography of the genera are briefly discussed.", "title": "" }, { "docid": "f7535a097b65dccf1ee8e615244d98c5", "text": "Wireless power transfer via magnetic resonant coupling is experimentally demonstrated in a system with a large source coil and either one or two small receivers. Resonance between source and load coils is achieved with lumped capacitors terminating the coils. 
A circuit model is developed to describe the system with a single receiver, and extended to describe the system with two receivers. With parameter values chosen to obtain good fits, the circuit models yield transfer frequency responses that are in good agreement with experimental measurements over a range of frequencies that span the resonance. Resonant frequency splitting is observed experimentally and described theoretically for the multiple receiver system. In the single receiver system at resonance, more than 50% of the power that is supplied by the actual source is delivered to the load. In a multiple receiver system, a means for tracking frequency shifts and continuously retuning the lumped capacitances that terminate each receiver coil so as to maximize efficiency is a key issue for future work.", "title": "" }, { "docid": "4e35e75d5fc074b1e02f5dded5964c19", "text": "This paper presents a new bidirectional wireless power transfer (WPT) topology using current fed half bridge converter. Generally, WPT topology with current fed converter uses parallel LC resonant tank network in the transmitter side to compensate the reactive power. However, in medium power application this topology suffers a major drawback that the voltage stress in the inverter switches are considerably high due to high reactive power consumed by the loosely coupled coil. In the proposed topology this is mitigated by adding a suitably designed capacitor in series with the transmitter coil. Both during grid to vehicle and vehicle to grid operations the power flow is controlled through variable switching frequency to achieve extended ZVS of the inverter switches. Detail analysis and converter design procedure is presented for both grid to vehicle and vehicle to grid operations. A 1.2kW lab-prototype is developed and experimental results are presented to verify the analysis.", "title": "" }, { "docid": "68ab3b742b2181a6d2e12ccc9ee46612", "text": "BACKGROUND\nLeadership is important in the implementation of innovation in business, health, and allied health care settings. Yet there is a need for empirically validated organizational interventions for coordinated leadership and organizational development strategies to facilitate effective evidence-based practice (EBP) implementation. This paper describes the initial feasibility, acceptability, and perceived utility of the Leadership and Organizational Change for Implementation (LOCI) intervention. A transdisciplinary team of investigators and community stakeholders worked together to develop and test a leadership and organizational strategy to promote effective leadership for implementing EBPs.\n\n\nMETHODS\nParticipants were 12 mental health service team leaders and their staff (n = 100) from three different agencies that provide mental health services to children and families in California, USA. Supervisors were randomly assigned to the 6-month LOCI intervention or to a two-session leadership webinar control condition provided by a well-known leadership training organization. 
We utilized mixed methods with quantitative surveys and qualitative data collected via surveys and a focus group with LOCI trainees.\n\n\nRESULTS\nQuantitative and qualitative analyses support the LOCI training and organizational strategy intervention in regard to feasibility, acceptability, and perceived utility, as well as impact on leader and supervisee-rated outcomes.\n\n\nCONCLUSIONS\nThe LOCI leadership and organizational change for implementation intervention is a feasible and acceptable strategy that has utility to improve staff-rated leadership for EBP implementation. Further studies are needed to conduct rigorous tests of the proximal and distal impacts of LOCI on leader behaviors, implementation leadership, organizational context, and implementation outcomes. The results of this study suggest that LOCI may be a viable strategy to support organizations in preparing for the implementation and sustainment of EBP.", "title": "" }, { "docid": "fdb23d6b43ef07761d90c3faeaefce5d", "text": "With the advent of big data phenomenon in the world of data and its related technologies, the developments on the NoSQL databases are highly regarded. It has been claimed that these databases outperform their SQL counterparts. The aim of this study is to investigate the claim by evaluating the document-oriented MongoDB database with SQL in terms of the performance of common aggregated and non-aggregate queries. We designed a set of experiments with a huge number of operations such as read, write, delete, and select from various aspects in the two databases and on the same data for a typical e-commerce schema. The results show that MongoDB performs better for most operations excluding some aggregate functions. The results can be a good source for commercial and non-commercial companies eager to change the structure of the database used to provide their line-of-business services.", "title": "" }, { "docid": "1c77370d8a69e83f45ddd314b798f1b1", "text": "The use of networks for communications between the Electronic Control Units (ECU) of a vehicle in production cars dates from the beginning of the 90s. The speci c requirements of the di erent car domains have led to the development of a large number of automotive networks such as LIN, CAN, CAN FD, FlexRay, MOST, automotive Ethernet AVB, etc.. This report rst introduces the context of in-vehicle embedded systems and, in particular, the requirements imposed on the communication systems. Then, a review of the most widely used, as well as the emerging automotive networks is given. Next, the current e orts of the automotive industry on middleware technologies which may be of great help in mastering the heterogeneity, are reviewed, with a special focus on the proposals of the AUTOSAR consortium. Finally, we highlight future trends in the development of automotive communication systems. ∗This technical report is an updated version of two earlier review papers on automotive networks: N. Navet, Y.-Q. Song, F. Simonot-Lion, C. Wilwert, \"Trends in Automotive Communication Systems\", Proceedings of the IEEE, special issue on Industrial Communications Systems, vol 96, no6, pp1204-1223, June 2005 [66]. An updated version of this IEEE Proceedings then appeared as chapter 4 in The Automotive Embedded Systems Handbook in 2008 [62].", "title": "" }, { "docid": "51f66b4ff06999f6ce7df45a1db1d8f7", "text": "Smart homes with advanced building technologies can react to sensor triggers in a variety of preconfigured ways. 
These rules are usually only visible within designated configuration interfaces. For this reason inhabitants who are not actively involved in the configuration process can be taken by surprise by the effects of such rules, such as for example the unexpected automated actions of lights or shades. To provide these inhabitants with better means to understand their home, as well as to increase their motivation to actively engage with its configuration, we propose Casalendar, a visualization that integrates the status of smart home technologies into the familiar interface of a calendar. We present our design and initial findings about the application of a temporal metaphor in smart home interfaces.", "title": "" }, { "docid": "2d0d42a6c712d93ace0bf37ffe786a75", "text": "Personalized search systems tailor search results to the current user intent using historic search interactions. This relies on being able to find pertinent information in that user's search history, which can be challenging for unseen queries and for new search scenarios. Building richer models of users' current and historic search tasks can help improve the likelihood of finding relevant content and enhance the relevance and coverage of personalization methods. The task-based approach can be applied to the current user's search history, or as we focus on here, all users' search histories as so-called \"groupization\" (a variant of personalization whereby other users' profiles can be used to personalize the search experience). We describe a method whereby we mine historic search-engine logs to find other users performing similar tasks to the current user and leverage their on-task behavior to identify Web pages to promote in the current ranking. We investigate the effectiveness of this approach versus query-based matching and finding related historic activity from the current user (i.e., group versus individual). As part of our studies we also explore the use of the on-task behavior of particular user cohorts, such as people who are expert in the topic currently being searched, rather than all other users. Our approach yields promising gains in retrieval performance, and has direct implications for improving personalization in search systems.", "title": "" }, { "docid": "34257e8924d8f9deec3171589b0b86f2", "text": "The topics treated in The brain and emotion include the definition, nature, and functions of emotion (Ch. 3); the neural bases of emotion (Ch. 4); reward, punishment, and emotion in brain design (Ch. 10); a theory of consciousness and its application to understanding emotion and pleasure (Ch. 9); and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, with classifying different emotions, and in understanding what information-processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward- and punishment-evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Ch. 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. 
The importance of reward and punishment systems in brain design also provides a basis for understanding the brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior.", "title": "" }, { "docid": "8b66accd5009666b233e90215b12dea4", "text": "Crowded streets are a major problem in large cities. A large part of the problem stems from drivers seeking on-street parking. Cities such as San Francisco, Los Angeles and Seattle have tackled this problem with smart parking systems that aim to maintain the on-street parking occupancy rates around a target level, thus ensuring that empty spots are spread across the city rather than clustered in a single area. In this study, we use the San Francisco's SFpark system as a case study. Specifically, in each given parking area, the SFpark uses occupancy rate data from the previous month to adjust the price in the current month. Instead, we propose a machine learning approach that predicts the occupancy rate of a parking area based on past occupancy rates and prices from an entire neighborhood (which covers many parking areas). We further formulate an optimization problem for the prices in each parking area that minimize the root mean squared error (RMSE) between the predicted occupancy rates of all areas in the neighborhood and the target occupancy rates. This approach is novel in that 1) it responds to a predicted level of occupancy rate rather than past data and 2) it find prices that optimize the total occupancy rate of all neighborhoods, taking under account that prices in one area can impact the demand in adjacent areas. We conduct a numerical study, using data collected from the SFpark study, that shows that the prices obtained from our optimization lead to occupancy rates that are very close to the desired target level.", "title": "" }, { "docid": "1b7b64bd6c51a2a81c112a43ff10bb86", "text": "We propose techniques for decentralizing prediction markets and order books, utilizing Bitcoin’s security model and consensus mechanism. Decentralization of prediction markets offers several key advantages over a centralized market: no single entity governs over the market, all transactions are transparent in the block chain, and anybody can participate pseudonymously to either open a new market or place bets in an existing one. We provide trust agility: each market has its own specified arbiter and users can choose to interact in markets that rely on the arbiters they trust. We also provide a transparent, decentralized order book that enables order execution on the block chain in the presence of potentially malicious miners. 1 Introductory Remarks Bitcoin has demonstrated that achieving consensus in a decentralized network is practical. This has stimulated research on applying Bitcoin-esque consensus mechanisms to new applications (e.g., DNS through Namecoin, timestamping through CommitCoin [10], and smart contracts through Ethereum). In this paper, we consider application of Bitcoin’s principles to prediction markets. A prediction market (PM) enables forecasts about uncertain future events to be forged into financial instruments that can be traded (bought, sold, shorted, etc.) until the uncertainty of the event is resolved. In several common forecasting scenarios, PMs have demonstrated lower error than polls, expert opinions, and statistical inference [2]. 
Thus an open and transparent PM not only serves its traders, it serves any stakeholder in the outcome by providing useful forecasting information through prices. Whenever discussing the application of Bitcoin to a new technology or service, its important to distinguish exactly what is meant. For example, a “Bitcoin-based prediction market” could mean at least three different things: (1) adding Bitcoin-related contracts (e.g., the future Bitcoin/USD exchange rate) to a traditional centralized PM, (2) converting the underlying currency of a centralized prediction market to Bitcoin, or (3) applying the design principles of Bitcoin to decentralize the functionality and governance of a PM. Of the three interpretations, approach (1) is not a research contribution. Approach (2) inherits most of the properties of a traditional PM: Opening markets for new future events is subject to a commitment by the PM host to determine the outcome, virtually any trading rules can be implemented, and trade settlement and clearing can be automated if money is held in trading accounts. In addition, by denominating the PM in Bitcoin, approach (2) enables easy electronic deposits and withdrawals from trading accounts, and can add a level of anonymity. An example of approach (2) is Predictious. This set of properties is a desirable starting point but we see several ways it can be improved through approach (3). Thus, our contribution is a novel PM design that enables: • A Decentralized Clearing/Settlement Service. Fully automated settlement and clearing of trades without escrowing funds to a trusted straight through processor (STP). • A Decentralized Order Matching Service. Fully automated matching of orders in a built-in call market, plus full support for external centralized exchanges. 4 http://namecoin.info 5 http://www.ethereum.org 6 https://www.predictious.com • Self-Organized Markets. Any participant can solicit forecasts on any event by arranging for any entity (or group of entities) to arbitrate the final payout based on the event’s outcome. • Agile Arbitration. Anyone can serve as an arbiter, and arbiters only need to sign two transactions (an agreement to serve and a declaration of an outcome) keeping the barrier to entry low. Traders can choose to participate in markets with arbiters they trust. Our analogue of Bitcoin miners can also arbitrate. • Transparency by Design. All trades, open markets, and arbitrated outcomes are reflected in a public ledger akin to Bitcoin’s block chain. • Flexible Fees. Fees paid to various parties can be configured on a per-market basis, with levels driven by market conditions (e.g., the minimum to incentivize correct behavior). • Resilience. Disruption to sets of participants will not disrupt the operations of the PM. • Theft Resistance. Like Bitcoin, currency and PM shares are held by the traders, and no transfers are possible without the holder’s digital signature. However like Bitcoin, users must protect their private keys and understand the risks of keeping money on an exchange service. • Pseudonymous Transactions. Like Bitcoin, holders of funds and shares are only identified with a pseudonymous public key, and any individual can hold an arbitrary number of keys. 2 Preliminaries and Related Work 2.1 Prediction Markets A PM enables participants to trade financial shares tied to the outcome of a specified future event. 
For example, if Alice, Bob, and Charlie are running for president, a share in ‘the outcome of Alice winning’ might entitle its holder to $1 if Alice wins and $0 if she does not. If the participants believed Alice to have a 60% chance of winning, the share would have an expected value of $0.60. In the opposite direction, if Bob and Charlie are trading at $0.30 and $0.10 respectively, the market on aggregate believes their likelihood of winning to be 30% and 10%. One of the most useful innovations of PMs is the intuitiveness of this pricing function [24]. Amateur traders and market observers can quickly assess current market belief, as well as monitor how forecasts change over time. The economic literature provides evidence that PMs can forecast certain types of events more accurately than methods that do not use financial incentives, such as polls (see [2] for an authoritative summary). They have been deployed internally by organizations such as the US Department of Defense, Microsoft, Google, IBM, and Intel, to forecast things like national security threats, natural disasters, and product development time and cost [2]. The literature on PMs tends to focus on topics orthogonal to how PMs are technically deployed, such as market scoring rules for market makers [13,9], accuracy of forecasts [23], and the relationship between share price and market belief [24]. Concurrently with the review of our paper, a decentralized PM called Truthcoin was independently proposed. It is also a Bitcoin-based design, however it focuses on determining a voting mechanism that incentivizes currency holders to judge the outcome of all events. We argue for designated arbitration in Section 5.1. Additionally, our approach does not use a market maker and is based on asset trading through a decentralized order book.", "title": "" }, { "docid": "5f9a122e8748d375b8b7bac838829b06", "text": "We analyzed heart rate variability (HRV) taken by ECG and photoplethysmography (PPG) to assess their agreement. We also analyzed the sensitivity and specificity of PPG to identify subjects with low HRV as an example of its potential use for clinical applications. The HRV parameters: mean heart rate (HR), amplitude, and ratio of heart rate oscillation (E–I difference, E/I ratio), RMSSD, SDNN, and Power LF, were measured during 1-min deep breathing tests (DBT) in 343 individuals, followed by a 5-min short-term HRV (s-HRV), where the HRV parameters: HR, SD1, SD2, SDNN, Stress Index, Power HF, Power LF, Power VLF, and Total Power, were determined as well. Parameters were compared through correlation analysis and agreement analysis by Bland–Altman plots. PPG derived parameters HR and SD2 in s-HRV showed better agreement than SD1, Power HF, and stress index, whereas in DBT HR, E/I ratio and SDNN were superior to Power LF and RMSSD. DBT yielded stronger agreement than s-HRV. A slight overestimation of PPG HRV over HCG HRV was found. HR, Total Power, and SD2 in the s-HRV, HR, Power LF, and SDNN in the DBT showed high sensitivity and specificity to detect individuals with poor HRV. Cutoff percentiles are given for the future development of PPG-based devices. 
HRV measured by PPG shows good agreement with ECG HRV when appropriate parameters are used, and PPG-based devices can be employed as an easy screening tool to detect individuals with poor HRV, especially in the 1-min DBT test.", "title": "" }, { "docid": "12968fe21e294aedc1c953e380bb4b9b", "text": "Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithm (GA) facilitates global effectiveness. This observation recently leads to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition and research has shown that construction of superior exemplars in PSO is more effective. Hence, this paper first develops a new framework so as to organically hybridize PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as per a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed in the paper, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, historical search information of particles provides guidance to the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified, but also high qualified. Under such guidance, the global search ability and search efficiency of PSO are both enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of the GL-PSO.", "title": "" }, { "docid": "e5e4349bb677bb128dcf1385c34cdf41", "text": "The occurrence of eight phosphorus flame retardants (PFRs) was investigated in 53 composite food samples from 12 food categories, collected in 2015 for a Swedish food market basket study. 2-ethylhexyl diphenyl phosphate (EHDPHP), detected in most food categories, had the highest median concentrations (9 ng/g ww, pastries). It was followed by triphenyl phosphate (TPHP) (2.6 ng/g ww, fats/oils), tris(1,3-dichloro-2-propyl) phosphate (TDCIPP) (1.0 ng/g ww, fats/oils), tris(2-chloroethyl) phosphate (TCEP) (1.0 ng/g ww, fats/oils), and tris(1-chloro-2-propyl) phosphate (TCIPP) (0.80 ng/g ww, pastries). Tris(2-ethylhexyl) phosphate (TEHP), tri-n-butyl phosphate (TNBP), and tris(2-butoxyethyl) phosphate (TBOEP) were not detected in the analyzed food samples. The major contributor to the total dietary intake was EHDPHP (57%), and the food categories which contributed the most to the total intake of PFRs were processed food, such as cereals (26%), pastries (10%), sugar/sweets (11%), and beverages (17%). The daily per capita intake of PFRs (TCEP, TPHP, EHDPHP, TDCIPP, TCIPP) from food ranged from 406 to 3266 ng/day (or 6-49 ng/kg bw/day), lower than the health-based reference doses. This is the first study reporting PFR intakes from other food categories than fish (here accounting for 3%). 
Our results suggest that the estimated human dietary exposure to PFRs may be equally important to the ingestion of dust.", "title": "" }, { "docid": "f72160ed6188424481fecbf4cb7ee31a", "text": "AIMS AND OBJECTIVES\nThe aim of this study was to identify factors that influence nurse's decisions to question concerning aspects of medication administration within the context of a neonatal clinical care unit.\n\n\nBACKGROUND\nMedication error in the neonatal setting can be high with this particularly vulnerable population. As the care giver responsible for medication administration, nurses are deemed accountable for most errors. However, they are recognised as the forefront of prevention. Minimal evidence is available around reasoning, decision making and questioning around medication administration. Therefore, this study focuses upon addressing the gap in knowledge around what nurses believe influences their decision to question.\n\n\nDESIGN\nA critical incident design was employed where nurses were asked to describe clinical incidents around their decision to question a medication issue. Nurses were recruited from a neonatal clinical care unit and participated in an individual digitally recorded interview.\n\n\nRESULTS\nOne hundred and three nurses participated between December 2013-August 2014. Use of the constant comparative method revealed commonalities within transcripts. Thirty-six categories were grouped into three major themes: 'Working environment', 'Doing the right thing' and 'Knowledge about medications'.\n\n\nCONCLUSIONS\nFindings highlight factors that influence nurses' decision to question issues around medication administration. Nurses feel it is their responsibility to do the right thing and speak up for their vulnerable patients to enhance patient safety. Negative dimensions within the themes will inform planning of educational strategies to improve patient safety, whereas positive dimensions must be reinforced within the multidisciplinary team.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe working environment must support nurses to question and ultimately provide safe patient care. Clear and up to date policies, formal and informal education, role modelling by senior nurses, effective use of communication skills and a team approach can facilitate nurses to appropriately question aspects around medication administration.", "title": "" }, { "docid": "0e4754e2b81c6a0b16921fcff55370ed", "text": "Lifestyle factors, including nutrition, play an important role in the etiology of Cardiovascular Disease (CVD). This position paper, written by collaboration between the Israel Heart Association and the Israel Dietetic Association, summarizes the current, preferably latest, literature on the association of nutrition and CVD with emphasis on the level of evidence and practical recommendations. The nutritional information is divided into three main sections: dietary patterns, individual food items, and nutritional supplements. The dietary patterns reviewed include low carbohydrate diet, low-fat diet, Mediterranean diet, and the DASH diet. Foods reviewed in the second section include: whole grains and dietary fiber, vegetables and fruits, nuts, soy, dairy products, alcoholic drinks, coffee and caffeine, tea, chocolate, garlic, and eggs. Supplements reviewed in the third section include salt and sodium, omega-3 and fish oil, phytosterols, antioxidants, vitamin D, magnesium, homocysteine-reducing agents, and coenzyme Q10.", "title": "" } ]
scidocsrr
c7b1fe4b35cab039b2d769ee841cba87
Building Virtual and Augmented Reality museum exhibitions
[ { "docid": "758def2083055b147d19b99280e5c8d2", "text": "We present the Virtual Showcase, a new multiviewer augmented reality display device that has the same form factor as a real showcase traditionally used for museum exhibits.", "title": "" } ]
[ { "docid": "c5b5dad34b50d061b0394ed80fcd3252", "text": "Crowdfunding provides a new opportunity for entrepreneurs to launch ventures without having to rely on traditional funding mechanisms, such as banks and angel investing. Despite its rapid growth, we understand little about how crowdfunding users build ad hoc online communities to undertake this new way of performing entrepreneurial work. To better understand this phenomenon, we performed a qualitative study of 47 entrepreneurs who use crowdfunding platforms to raise funds for their projects. We identify community efforts to support crowdfunding work, such as providing mentorship to novices, giving feedback on campaign presentation, and building a repository of example projects to serve as models. We also identify where community efforts and technologies succeed and fail at supporting the work in order to inform the design of crowdfunding support tools and systems.", "title": "" }, { "docid": "838e6c58f3bb7a0b8350d12d45813b5a", "text": "Heterogeneous networks not only present a challenge of heterogeneity in the types of nodes and relations, but also the attributes and content associated with the nodes. While recent works have looked at representation learning on homogeneous and heterogeneous networks, there is no work that has collectively addressed the following challenges: (a) the heterogeneous structural information of the network consisting of multiple types of nodes and relations; (b) the unstructured semantic content (e.g., text) associated with nodes; and (c) online updates due to incoming new nodes in a growing network. We address these challenges by developing a Content-Aware Representation Learning model (CARL). CARL performs joint optimization of heterogeneous SkipGram and deep semantic encoding for capturing both heterogeneous structural closeness and unstructured semantic relations among all nodes, as a function of node content, that exist in the network. Furthermore, an additional online update module is proposed for efficiently learning representations of incoming nodes. Extensive experiments demonstrate that CARL outperforms state-of-the-art baselines in various heterogeneous network mining tasks, such as link prediction, document retrieval, node recommendation and relevance search. We also demonstrate the effectiveness of the CARL’s online update module through a category visualization study.", "title": "" }, { "docid": "c398cdaf29b576a43150496bb0732444", "text": "While many high-quality tools are available for analyzing major languages such as English, equivalent freely-available tools for important but lower-resourced languages such as Farsi are more difficult to acquire and integrate into a useful NLP front end. We report here on an accurate and efficient Farsi analysis front end that we have assembled, which may be useful to others who wish to work with written Farsi. The pre-existing components and resources that we incorporated include the Carnegie Mellon TurboParser and TurboTagger (Martins et al., 2010) trained on the Dadegan Treebank (Rasooli et al., 2013), the Uppsala Farsi text normalizer PrePer (Seraji, 2013), the Uppsala Farsi tokenizer (Seraji et al., 2012a), and Jon Dehdari’s PerStem (Jadidinejad et al., 2010).
This set of tools (combined with additional normalization and tokenization modules that we have developed and made available) achieves a dependency parsing labeled attachment score of 89.49%, unlabeled attachment score of 92.19%, and label accuracy score of 91.38% on a held-out parsing test data set. All of the components and resources used are freely available. In addition to describing the components and resources, we also explain the rationale for our choices.", "title": "" }, { "docid": "971227f276624394bf87678186d99e2d", "text": "Some of the most challenging issues in data outsourcing scenario are the enforcement of authorization policies and the support of policy updates. Ciphertext-policy attribute-based encryption is a promising cryptographic solution to these issues for enforcing access control policies defined by a data owner on outsourced data. However, the problem of applying the attribute-based encryption in an outsourced architecture introduces several challenges with regard to the attribute and user revocation. In this paper, we propose an access control mechanism using ciphertext-policy attribute-based encryption to enforce access control policies with efficient attribute and user revocation capability. The fine-grained access control can be achieved by dual encryption mechanism which takes advantage of the attribute-based encryption and selective group key distribution in each attribute group. We demonstrate how to apply the proposed mechanism to securely manage the outsourced data. The analysis results indicate that the proposed scheme is efficient and secure in the data outsourcing systems.", "title": "" }, { "docid": "15753e152898b07fda8807c670127c72", "text": "The increasing influence of social media and enormous participation of users creates new opportunities to study human social behavior along with the capability to analyze large amount of data streams. One of the interesting problems is to distinguish between different kinds of users, for example users who are leaders and introduce new issues and discussions on social media. Furthermore, positive or negative attitudes can also be inferred from those discussions. Such problems require a formal interpretation of social media logs and unit of information that can spread from person to person through the social network. Once the social media data such as user messages are parsed and network relationships are identified, data mining techniques can be applied to group different types of communities. However, the appropriate granularity of user communities and their behavior is hardly captured by existing methods. In this paper, we present a framework for the novel task of detecting communities by clustering messages from large streams of social data. Our framework uses K-Means clustering algorithm along with Genetic algorithm and Optimized Cluster Distance (OCD) method to cluster data. The goal of our proposed framework is twofold that is to overcome the problem of general K-Means for choosing best initial centroids using Genetic algorithm, as well as to maximize the distance between clusters by pairwise clustering using OCD to get an accurate clusters. We used various cluster validation metrics to evaluate the performance of our algorithm. The analysis shows that the proposed method gives better clustering results and provides a novel use-case of grouping user communities based on their activities. 
Our approach is optimized and scalable for real-time clustering of social media data.", "title": "" }, { "docid": "fc979cd4ce249f0e32b5f887e123cb69", "text": "With the improvement of medical insurance system, the coverage of medicare increases a lot. However, while the expenditure of this system is continuously rising, medicare fraud is causing huge losses for this system. Traditional medicare fraud detection greatly depends on the experience of domain experts, which is not accurate enough and costs much time and labor.In this study, we propose a medicare fraud detection framework based on the technology of anomaly detection. Our method consists of two parts. First part is a spatial density based algorithm, called improved local outlier factor (imLOF), which is more applicable than simple local outlier factor in medical insurance data. Second part is robust regression to depict the linear dependence between variables. Some experiments are conducted on real world data to measure the efficiency of our method.", "title": "" }, { "docid": "d97a32276d54fcf21c26225c7c2d8199", "text": "In deep classification, the softmax loss (Softmax) is arguably one of the most commonly used components to train deep convolutional neural networks (CNNs). However, such a widely used loss is limited due to its lack of encouraging the discriminability of features. Recently, the large-margin softmax loss (L-Softmax [14]) is proposed to explicitly enhance the feature discrimination, with hard margin and complex forward and backward computation. In this paper, we propose a novel soft-margin softmax (SM-Softmax) loss to improve the discriminative power of features. Specifically, SM-Softamx only modifies the forward of Softmax by introducing a non-negative real number m, without changing the backward. Thus it can not only adjust the desired continuous soft margin but also be easily optimized by the typical stochastic gradient descent (SGD). Experimental results on three benchmark datasets have demonstrated the superiority of our SM-Softmax over the baseline Softmax, the alternative L-Softmax and several state-of-the-art competitors.", "title": "" }, { "docid": "73d9461101dc15f93f52d2ab9b8c0f39", "text": "The need for mining structured data has increased in the past few years. One of the best studied data structures in computer science and discrete mathematics are graphs. It can therefore be no surprise that graph based data mining has become quite popular in the last few years.This article introduces the theoretical basis of graph based data mining and surveys the state of the art of graph-based data mining. Brief descriptions of some representative approaches are provided as well.", "title": "" }, { "docid": "3bb4666a27f6bc961aa820d3f9301560", "text": "The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss the multi-agent models and the verification results of the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. The conjecture is that intention aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis to formally prove this conjecture.", "title": "" }, { "docid": "5518e4814b9eb0cd90b3563bd33c0ddc", "text": "Most machine-learning methods focus on classifying instances whose classes have already been seen in training. 
In practice, many applications require classifying instances whose classes have not been seen previously. Zero-shot learning is a powerful and promising learning paradigm, in which the classes covered by training instances and the classes we aim to classify are disjoint. In this paper, we provide a comprehensive survey of zero-shot learning. First of all, we provide an overview of zero-shot learning. According to the data utilized in model optimization, we classify zero-shot learning into three learning settings. Second, we describe different semantic spaces adopted in existing zero-shot learning works. Third, we categorize existing zero-shot learning methods and introduce representative methods under each category. Fourth, we discuss different applications of zero-shot learning. Finally, we highlight promising future research directions of zero-shot learning.", "title": "" }, { "docid": "331df0bd161470558dd5f5061d2b1743", "text": "The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system’s efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data.", "title": "" }, { "docid": "e347eadb8df6386e70171d73388b8ace", "text": "An ultra-large voltage conversion ratio converter is proposed by integrating a switched-capacitor circuit with a coupled inductor technology. The proposed converter can be seen as an equivalent parallel connection to the load of a basic boost converter and a number of forward converters, each one containing a switched-capacitor circuit. All the stages are activated by the boost switch. A single active switch is required, with no need of extreme duty-ratio values. The leakage energy of the coupled inductor is recycled to the load. The inrush current problem of switched capacitors is restrained by the leakage inductance of the coupled-inductor. The above features are the reason for the high efficiency performance. The operating principles and steady state analyses of continuous, discontinuous and boundary conduction modes are discussed in detail. To verify the performance of the proposed converter, a 200 W/20 V to 400 V prototype was implemented. 
The maximum measured efficiency is 96.4%. The full load efficiency is 95.1%.", "title": "" }, { "docid": "f1ae440f8c29d5f9406aa55ca31e2bb4", "text": "ConceptMapper is an open source tool we created for classifying mentions in an unstructured text document based on concept terminologies and yielding named entities as output. It is implemented as a UIMA1 (Unstructured Information Management Architecture (IBM, 2004)) annotator, and concepts come from standardised or proprietary terminologies. ConceptMapper can be easily configured, for instance, to use different search strategies or syntactic concepts. In this paper we will describe ConceptMapper, its configuration parameters and their trade-offs, in terms of precision and recall in identifying concepts in a collection of clinical reports written in English. ConceptMapper is available from the Apache UIMA Sandbox, using the Apache Open Source license.", "title": "" }, { "docid": "cbbb2c0a9d2895c47c488bed46d8f468", "text": "We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.", "title": "" }, { "docid": "5c2f115e0159d15a87904e52879c1abf", "text": "Current approaches for visual--inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual--inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.", "title": "" }, { "docid": "4dc6f5768b43e6c491f0b08600acbea5", "text": "Stochastic Dual Coordinate Ascent is a popular method for solving regularized loss minimization for the case of convex losses. 
We describe variants of SDCA that do not require explicit regularization and do not rely on duality. We prove linear convergence rates even if individual loss functions are non-convex, as long as the expected loss is strongly convex.", "title": "" }, { "docid": "6831c633bf7359b8d22296b52a9a60b8", "text": "The paper presents a system, Heart Track, which aims for automated ECG (Electrocardiogram) analysis. Different modules and algorithms which are proposed and used for implementing the system are discussed. The ECG is the recording of the electrical activity of the heart and represents the depolarization and repolarization of the heart muscle cells and the heart chambers. The electrical signals from the heart are measured non-invasively using skin electrodes and appropriate electronic measuring equipment. ECG is measured using 12 leads which are placed at specific positions on the body [2]. The required data is converted into ECG curve which possesses a characteristic pattern. Deflections from this normal ECG pattern can be used as a diagnostic tool in medicine in the detection of cardiac diseases. Diagnosis of large number of cardiac disorders can be predicted from the ECG waves wherein each component of the ECG wave is associated with one or the other disorder. This paper concentrates entirely on detection of Myocardial Infarction, hence only the related components (ST segment) of the ECG wave are analyzed.", "title": "" }, { "docid": "c95da5ee6fde5cf23b551375ff01e709", "text": "The 3GPP has introduced the LTE-M and NB-IoT User Equipment categories and made amendments to LTE release 13 to support the cellular Internet of Things. The contribution of this paper is to analyze the coverage probability, the number of supported devices, and the device battery life in networks equipped with either of the newly standardized technologies. The study is made for a site specific network deployment of a Danish operator, and the simulation is calibrated using drive test measurements. The results show that LTE-M can provide coverage for 99.9 % of outdoor and indoor devices, if the latter is experiencing 10 dB additional loss. However, for deep indoor users NB-IoT is required and provides coverage for about 95 % of the users. The cost is support for more than 10 times fewer devices and a 2-6 times higher device power consumption. Thus both LTE-M and NB- IoT provide extended support for the cellular Internet of Things, but with different trade- offs.", "title": "" } ]
scidocsrr
d79305cc58cf6dfaaca3225e4bad452c
An Empirical Study of Application of Data Mining Techniques in Library System
[ { "docid": "7804d1c4ec379ed47d45917786946b2f", "text": "Data mining technology has been applied to library management. In this paper, Boustead College Library Information Management System in the history of circulation records, the reader information and collections as a data source, using the Microsoft SQL Server 2005 as a data mining tool, applying data mining algorithm as cluster, association rules and time series to identify characteristics of the reader to borrow in order to achieve individual service.", "title": "" }, { "docid": "52fb72d1b6f5384baa76e76aae2eeee0", "text": "Data mining techniques have been successfully applied in stock, insurance, medicine, banking and retailing domains. In the sport domain, for transforming sport data into actionable knowledge, coaches can use data mining techniques to plan training sessions more effectively, and to reduce the impact of testing activity on athletes. This paper presents one such model, which uses clustering techniques, such as improved K-Means, Expectation-Maximization (EM), DBSCAN, COBWEB and hierarchical clustering approaches to analyze sport physiological data collected during incremental tests. Through analyzing the progress of a test session, the authors assign the tested athlete to a group of athletes and evaluate these groups to support the planning of training sessions.", "title": "" } ]
[ { "docid": "2a3af43968d5c254d1ef59925e2b3b64", "text": "Fuzzing is a commonly used technique designed to test software by automatically crafting program inputs. Currently, the most successful fuzzing algorithms emphasize simple, low-overhead strategies with the ability to efficiently monitor program state during execution. Through compile-time instrumentation, these approaches have access to numerous aspects of program state including coverage, data flow, and heterogeneous fault detection and classification. However, existing approaches utilize blind random mutation strategies when generating test inputs. We present a different approach that uses this state information to optimize mutation operators using reinforcement learning (RL). By integrating OpenAI Gym with libFuzzer we are able to simultaneously leverage advancements in reinforcement learning as well as fuzzing to achieve deeper coverage across several varied benchmarks. Our technique connects the rich, efficient program monitors provided by LLVM Santizers with a deep neural net to learn mutation selection strategies directly from the input data. The cross-language, asynchronous architecture we developed enables us to apply any OpenAI Gym compatible deep reinforcement learning algorithm to any fuzzing problem with minimal slowdown.", "title": "" }, { "docid": "b33eaecf2aff15ecb2f0d256bde7e1bb", "text": "This paper presents an objective evaluation of various eye movement-based biometric features and their ability to accurately and precisely distinguish unique individuals. Eye movements are uniquely counterfeit resistant due to the complex neurological interactions and the extraocular muscle properties involved in their generation. Considered biometric candidates cover a number of basic eye movements and their aggregated scanpath characteristics, including: fixation count, average fixation duration, average saccade amplitudes, average saccade velocities, average saccade peak velocities, the velocity waveform, scanpath length, scanpath area, regions of interest, scanpath inflections, the amplitude-duration relationship, the main sequence relationship, and the pairwise distance between fixations. As well, an information fusion method for combining these metrics into a single identification algorithm is presented. With limited testing this method was able to identify subjects with an equal error rate of 27%. These results indicate that scanpath-based biometric identification holds promise as a behavioral biometric technique.", "title": "" }, { "docid": "052121ed86fe268f0a58d6a6c53e342f", "text": "Utilization of power electronics converters in pulsed power applications introduced a new series of reliable, long life, and cost-effective pulse generators. However, these converters suffer from the limited power ratings of semiconductor switches, which necessitate introduction of a new family of modular topologies. This paper proposes a modular power electronics converter based on voltage multiplier as a high voltage pulsed power generator. This modular circuit is able to generate flexible high output voltage from low input voltage sources. Circuit topology and operational principles of proposed topology are verified via experimental and simulation results as well as theoretical analysis.", "title": "" }, { "docid": "375d5fcb41b7fb3a2f60822720608396", "text": "We present a full-stack design to accelerate deep learning inference with FPGAs. Our contribution is two-fold. 
At the software layer, we leverage and extend TVM, the end-to-end deep learning optimizing compiler, in order to harness FPGA-based acceleration. At the hardware layer, we present the Versatile Tensor Accelerator (VTA) which presents a generic, modular, and customizable architecture for TPU-like accelerators. Our results take a ResNet-18 description in MxNet and compile it down to perform 8-bit inference on a 256-PE accelerator implemented on a low-cost Xilinx Zynq FPGA, clocked at 100MHz. Our full hardware acceleration stack will be made available for the community to reproduce, and build upon at http://github.com/uwsaml/vta.", "title": "" }, { "docid": "18b78d3b94b077c481792d51b73549b0", "text": "In recent years a number of solvers for the direct solution of large sparse symmetric linear systems of equations have been developed. These include solvers that are designed for the solution of positive definite systems as well as those that are principally intended for solving indefinite problems. In this study, we use performance profiles as a tool for evaluating and comparing the performance of serial sparse direct solvers on an extensive set of symmetric test problems taken from a range of practical applications.", "title": "" }, { "docid": "7b9dcf0f5890b216aa71588166a772f2", "text": "This study compared anthropometric (body height, body mass, percent body fat, fat-free body mass) and physical fitness characteristics (vertical jump height, power-load curve of the leg, 5 and 15 m sprint running time and blood lactate concentrations ([La]b) at submaximal running velocities) among 15 elite male indoor soccer (IS) and 25 elite male outdoor soccer (OS) players. IS players had similar values in body height, body mass, fat-free body mass and endurance running than OS players. However, the IS group showed higher (P < 0.05–0.01) values in percent body fat (28%) and sprint running time (2%) but lower values in vertical jump (15%) and half-squat power (20%) than the OS group. Significant negative correlations (P < 0.05–0.01) were observed between maximal sprint running time, power production during half-squat actions, as well as [La]b at submaximal running velocities. Percent body fat correlated positively with maximal sprint time and [La]b, but correlated negatively with vertical jump height. The present results show that compared to elite OS players, elite IS players present clearly lower physical fitness (lower maximal leg extension power production) characteristics associated with higher values of percent body fat. This should give IS players a disadvantage during soccer game actions.", "title": "" }, { "docid": "dbdbdf3df12ef47c778e0e9f4ddfc7d6", "text": "In the recent years, research on speech recognition has given much diligence to the automatic transcription of speech data such as broadcast news (BN), medical transcription, etc. Large Vocabulary Continuous Speech Recognition (LVCSR) systems have been developed successfully for Englishes (American English (AE), British English (BE), etc.) and other languages but in case of Indian English (IE), it is still at infancy stage. IE is one of the varieties of English spoken in Indian subcontinent and is largely different from the English spoken in other parts of the world. In this paper, we have presented our work on LVCSR of IE video lectures. The speech data contains video lectures on various engineering subjects given by the experts from all over India as part of the NPTEL project which comprises of 23 hours.
We have used CMU Sphinx for training and decoding in our large vocabulary continuous speech recognition experiments. The results analysis instantiate that building IE acoustic model for IE speech recognition is essential due to the fact that it has given 34% less average word error rate (WER) than HUB-4 acoustic models. The average WER before and after adaptation of IE acoustic model is 38% and 31% respectively. Even though, our IE acoustic model is trained with limited training data and the corpora used for building the language models do not mimic the spoken language, the results are promising and comparable to the results reported for AE lecture recognition in the literature.", "title": "" }, { "docid": "f6dbce178e428522c80743e735920875", "text": "With the recent advancement in deep learning, we have witnessed a great progress in single image super-resolution. However, due to the significant information loss of the image downscaling process, it has become extremely challenging to further advance the state-of-theart, especially for large upscaling factors. This paper explores a new research direction in super resolution, called reference-conditioned superresolution, in which a reference image containing desired high-resolution texture details is provided besides the low-resolution image. We focus on transferring the high-resolution texture from reference images to the super-resolution process without the constraint of content similarity between reference and target images, which is a key difference from previous example-based methods. Inspired by recent work on image stylization, we address the problem via neural texture transfer. We design an end-to-end trainable deep model which generates detail enriched results by adaptively fusing the content from the low-resolution image with the texture patterns from the reference image. We create a benchmark dataset for the general research of reference-based super-resolution, which contains reference images paired with low-resolution inputs with varying degrees of similarity. Both objective and subjective evaluations demonstrate the great potential of using reference images as well as the superiority of our results over other state-of-the-art methods.", "title": "" }, { "docid": "d747857cda669738cc8c27cc0a92a95d", "text": "Angle of Arrival (AoA) estimation that applies wideband channel estimation is superior to the narrowband MUSIC (multiple signal classification) approach, even when averaging its results over the entire relevant band. This work reports the results of indoor AoA estimation based on wideband propagation channel measurements taken over a uniform linear antenna array. The measurements were carried out around 2.4 GHz, with 50 to 800 MHz bandwidths.", "title": "" }, { "docid": "ed3d82783cf084ae4c36aece1045c64f", "text": "The current malicious URLs detecting techniques based on whole URL information are hard to detect the obfuscated malicious URLs. The most precise way to identify a malicious URL is verifying the corresponding web page contents. However, it costs very much in time, traffic and computing resource. Therefore, a filtering process that detecting more suspicious URLs which should be further verified is required in practice. In this work, we propose a suspicious URL filtering approach based on multi-view analysis in order to reduce the impact from URL obfuscation techniques. URLs are composed of several portions, each portion has a specific use. 
The proposed method intends to learn the characteristics from multiple portions (multi-view) of URLs for giving the suspicion level of each portion. Adjusting the suspicion threshold of each portion, the proposed system would select the most suspicious URLs. This work uses the real dataset from T. Co. to evaluate the proposed system. The requests from T. Co. are (1) detection rate should be less than 25%, (2) missing rate should be lower than 25%, and (3) the process with one hour data should be end in an hour. The experiment results show that our approach is effective, is capable to reserve more malicious URLs in the selected suspicious ones and satisfy the requests given by practical environment, such as T. Co. daily works.", "title": "" }, { "docid": "d18a2e1811f2d11e88c9ae780a8ede23", "text": "In this paper, we present the design of error-resilient machine learning architectures by employing a distributed machine learning framework referred to as classifier ensemble (CE). CE combines several simple classifiers to obtain a strong one. In contrast, centralized machine learning employs a single complex block. We compare the random forest (RF) and the support vector machine (SVM), which are representative techniques from the CE and centralized frameworks, respectively. Employing the dataset from UCI machine learning repository and architecturallevel error models in a commercial 45 nm CMOS process, it is demonstrated that RF-based architectures are significantly more robust than SVM architectures in presence of timing errors due to process variations in near-threshold voltage (NTV) regions (0.3 V 0.7 V). In particular, the RF architecture exhibits a detection accuracy (Pdet) that varies by 3.2% while maintaining a median Pdet ≥ 0.9 at a gate level delay variation of 28.9% . In comparison, SVM exhibits a Pdet that varies by 16.8%. Additionally, we propose an error weighted voting technique that incorporates the timing error statistics of the NTV circuit fabric to further enhance robustness. Simulation results confirm that the error weighted voting achieves a Pdet that varies by only 1.4%, which is 12× lower compared to SVM.", "title": "" }, { "docid": "f3ed24816a14d2c9d96e4d74f9ca5b52", "text": "Sentiment analysis (SA) using code-mixed data from social media has several applications in opinion mining ranging from customer satisfaction to social campaign analysis in multilingual societies. Advances in this area are impeded by the lack of a suitable annotated dataset. We introduce a Hindi-English (Hi-En) code-mixed dataset for sentiment analysis and perform empirical analysis comparing the suitability and performance of various state-of-the-art SA methods in social media. In this paper, we introduce learning sub-word level representations in LSTM (Subword-LSTM) architecture instead of character-level or word-level representations. This linguistic prior in our architecture enables us to learn the information about sentiment value of important morphemes. This also seems to work well in highly noisy text containing misspellings as shown in our experiments which is demonstrated in morpheme-level feature maps learned by our model. Also, we hypothesize that encoding this linguistic prior in the Subword-LSTM architecture leads to the superior performance. 
Our system attains accuracy 4-5% greater than traditional approaches on our dataset, and also outperforms the available system for sentiment analysis in Hi-En code-mixed text by 18%.", "title": "" }, { "docid": "14494622fc47aa261038c10153dbb828", "text": "This article describes a robust semantic parser that uses a broad knowledge base created by interconnecting three major resources: FrameNet, VerbNet and PropBank. The FrameNet corpus contains the examples annotated with semantic roles whereas the VerbNet lexicon provides the knowledge about the syntactic behavior of the verbs. We connect VerbNet and FrameNet by mapping the FrameNet frames to the VerbNet Intersective Levin classes. The PropBank corpus, which is tightly connected to the VerbNet lexicon, is used to increase the verb coverage and also to test the effectiveness of our approach. The results indicate that our model is an interesting step towards the design of more robust semantic parsers.", "title": "" }, { "docid": "4f3177b303b559f341b7917683114257", "text": "We investigate the integration of a planning mechanism into sequence-to-sequence models using attention. We develop a model which can plan ahead in the future when it computes its alignments between input and output sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the recently proposed strategic attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed model is end-to-end trainable using primarily differentiable operations. We show that it outperforms a strong baseline on character-level translation tasks from WMT’15, the algorithmic task of finding Eulerian circuits of graphs, and question generation from the text. Our analysis demonstrates that the model computes qualitatively intuitive alignments, converges faster than the baselines, and achieves superior performance with fewer parameters.", "title": "" }, { "docid": "8304509377e6abecfc62f5bcc76be519", "text": "This paper developed a practical split-window (SW) algorithm to estimate land surface temperature (LST) from Thermal Infrared Sensor (TIRS) aboard Landsat 8. The coefficients of the SW algorithm were determined based on atmospheric water vapor sub-ranges, which were obtained through a modified split-window covariance–variance ratio method. The channel emissivities were acquired from newly released global land cover products at 30 m and from a fraction of the vegetation cover calculated from visible and near-infrared images aboard Landsat 8. Simulation results showed that the new algorithm can obtain LST with an accuracy of better than 1.0 K. The model consistency to the noise of the brightness temperature, emissivity and water vapor was conducted, which indicated the robustness of the new algorithm in LST retrieval. Furthermore, based on comparisons, the new algorithm performed better than the existing algorithms in retrieving LST from TIRS data. Finally, the SW algorithm was proven to be reliable through application in different regions. To further confirm the credibility of the SW algorithm, the LST will be validated in the future.", "title": "" }, { "docid": "55d92c6a46c491a5cc8d727536077c3c", "text": "Given a collection of objects and an associated similarity measure, the all-pairs similarity search problem asks us to find all pairs of objects with similarity greater than a certain user-specified threshold. 
Locality-sensitive hashing (LSH) based methods have become a very popular approach for this problem. However, most such methods only use LSH for the first phase of similarity search i.e. efficient indexing for candidate generation. In this paper, we presentBayesLSH, a principled Bayesian algorithm for the subsequent phase of similarity search performing candidate pruning and similarity estimation using LSH. A simpler variant, BayesLSHLite, which calculates similarities exactly, is also presented. BayesLSH is able to quickly prune away a large majority of the false positive candidate pairs, leading to significant speedups over baseline approaches. For BayesLSH, we also provide probabilistic guarantees on the quality of the output, both in terms of accuracy and recall. Finally, the quality of BayesLSH’s output can be easily tuned and does not require any manual setting of the number of hashes to use for similarity estimation, unlike standard approaches. For two state-of-the-art candidate generation algorithms, AllPairs [3] and LSH, BayesLSH enables significant speedups, typically in the range 2x-20x for a wide variety of datasets.", "title": "" }, { "docid": "df97ff54b80a096670c7771de1f49b6d", "text": "In recent times, Bitcoin has gained special attention both from industry and academia. The underlying technology that enables Bitcoin (or more generally crypto-currency) is called blockchain. At the core of the blockchain technology is a data structure that keeps record of the transactions in the network. The special feature that distinguishes it from existing technology is its immutability of the stored records. To achieve immutability, it uses consensus and cryptographic mechanisms. As the data is stored in distributed nodes this technology is also termed as \"Distributed Ledger Technology (DLT)\". As many researchers and practitioners are joining the hype of blockchain, some of them are raising the question about the fundamental difference between blockchain and traditional database and its real value or potential. In this paper, we present a critical analysis of both technologies based on a survey of the research literature where blockchain solutions are applied to various scenarios. Based on this analysis, we further develop a decision tree diagram that will help both practitioners and researchers to choose the appropriate technology for their use cases. Using our proposed decision tree we evaluate a sample of the existing works to see to what extent the blockchain solutions have been used appropriately in the relevant problem domains.", "title": "" }, { "docid": "48cfb0c1b3b2ce7ce00aa972a3e599e7", "text": "This paper discusses some relevant work of emotion detection from text which is a main field in affecting computing and artificial intelligence field. Artificial intelligence is not only the ability for a machine to think or interact with end user smartly but also to act humanly or rationally so emotion detection from text plays a key role in human-computer interaction. It has attracted the attention of many researchers due to the great revolution of emotional data available on social and web applications of computers and much more in mobile devices. This survey mainly collects history of unsupervised emotion detection from text.", "title": "" }, { "docid": "b99b9f80b4f0ca4a8d42132af545be76", "text": "By: Catherine L. Anderson Decision, Operations, and Information Technologies Department Robert H. 
Smith School of Business University of Maryland Van Munching Hall College Park, MD 20742-1815 U.S.A. Catherine_Anderson@rhsmith.umd.edu Ritu Agarwal Center for Health Information and Decision Systems University of Maryland 4327 Van Munching Hall College Park, MD 20742-1815 U.S.A. ragarwal@rhsmith.umd.edu", "title": "" }, { "docid": "9a972fe5e02400799310bfcc105f36b4", "text": "Twitter provides access to large volumes of data in real time, but is notoriously noisy, hampering its utility for NLP. In this article, we target out-of-vocabulary words in short text messages and propose a method for identifying and normalizing lexical variants. Our method uses a classifier to detect lexical variants, and generates correction candidates based on morphophonemic similarity. Both word similarity and context are then exploited to select the most probable correction candidate for the word. The proposed method doesn't require any annotations, and achieves state-of-the-art performance over an SMS corpus and a novel dataset based on Twitter.", "title": "" } ]
scidocsrr
fa79db2b9c9d6317c0f0ac0c99f6fac7
Rotation Equivariant CNNs for Digital Pathology
[ { "docid": "ec90e30c0ae657f25600378721b82427", "text": "We use deep max-pooling convolutional neural networks to detect mitosis in breast histology images. The networks are trained to classify each pixel in the images, using as context a patch centered on the pixel. Simple postprocessing is then applied to the network output. Our approach won the ICPR 2012 mitosis detection competition, outperforming other contestants by a significant margin.", "title": "" }, { "docid": "85b9cd3e6f0f55ad4aea17a52e25bcf8", "text": "Translating or rotating an input image should not affect the results of many computer vision tasks. Convolutional neural networks (CNNs) are already translation equivariant: input image translations produce proportionate feature map translations. This is not the case for rotations. Global rotation equivariance is typically sought through data augmentation, but patch-wise equivariance is more difficult. We present Harmonic Networks or H-Nets, a CNN exhibiting equivariance to patch-wise translation and 360-rotation. We achieve this by replacing regular CNN filters with circular harmonics, returning a maximal response and orientation for every receptive field patch. H-Nets use a rich, parameter-efficient and fixed computational complexity representation, and we show that deep feature maps within the network encode complicated rotational invariants. We demonstrate that our layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization. We also achieve state-of-the-art classification on rotated-MNIST, and competitive results on other benchmark challenges.", "title": "" } ]
[ { "docid": "b766fe26da9106d65a72b564594e28e6", "text": "The thalamus has long been seen as responsible for relaying information on the way to the cerebral cortex, but it has not been until the last decade or so that the functional nature of this relay has attracted significant attention. Whereas earlier views tended to relegate thalamic function to a simple, machine-like relay process, recent research, reviewed in this article, demonstrates complicated circuitry and a rich array of membrane properties underlying the thalamic relay. It is now clear that the thalamic relay does not have merely a trivial function. Suggestions that the thalamic circuits and cell properties only come into play during certain phases of sleep to effectively disconnect the relay are correct as far as they go, but they are incomplete, because they fail to take into account interesting and variable properties of the relay that, we argue, occur during normal waking behavior. Although the specific function of the circuits and cellular properties of the thalamic relay for waking behavior is far from clear, we offer two related hypotheses based on recent experimental evidence. One is that the thalamus is not used just to relay peripheral information from, for example, visual, auditory, or cerebellar inputs, but that some thalamic nuclei are arranged instead to relay information from one cortical area to another. The second is that the thalamus is not a simple, passive relay of information to cortex but instead is involved in many dynamic processes that significantly alter the nature of the information relayed to cortex.", "title": "" }, { "docid": "1a38f4218ab54ff22c776eb5572409bf", "text": "Deep learning has achieved significant improvement in various machine learning tasks including image recognition, speech recognition, machine translation and etc. Inspired by the huge success of the paradigm, there have been lots of tries to apply deep learning algorithms to data analytics problems with big data including traffic flow prediction. However, there has been no attempt to apply the deep learning algorithms to the analysis of air traffic data. This paper investigates the effectiveness of the deep learning models in the air traffic delay prediction tasks. By combining multiple models based on the deep learning paradigm, an accurate and robust prediction model has been built which enables an elaborate analysis of the patterns in air traffic delays. In particular, Recurrent Neural Networks (RNN) has shown its great accuracy in modeling sequential data. Day-to-day sequences of the departure and arrival flight delays of an individual airport have been modeled by the Long Short-Term Memory RNN architecture. It has been shown that the accuracy of RNN improves with deeper architectures. In this study, four different ways of building deep RNN architecture are also discussed. Finally, the accuracy of the proposed prediction model was measured, analyzed and compared with previous prediction methods. It shows best accuracy compared with all other methods.", "title": "" }, { "docid": "e2606242fcc89bfcf5c9c4cd71dd2c18", "text": "This letter introduces the class of generalized punctured convolutional codes (GPCCs), which is broader than and encompasses the class of the standard punctured convolutional codes (PCCs). A code in this class can be represented by a trellis module, the GPCC trellis module, whose topology resembles that of the minimal trellis module. 
The GPCC trellis module for a PCC is isomorphic to the minimal trellis module. A list containing GPCCs with better distance spectrum than the best known PCCs with the same code rate and trellis complexity is presented.", "title": "" }, { "docid": "0e5eb8191cea7d3a59f192aa32a214c4", "text": "Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate human-generated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copy- and reconstruction-based extensions lead to noticeable improvements.", "title": "" }, { "docid": "226f84ed038a4509d9f3931d7df8b977", "text": "Physically Asynchronous/Logically Synchronous (PALS) is an architecture pattern that allows developers to design and verify a system as though all nodes executed synchronously. The correctness of the PALS protocol was formally verified. However, the implementation of PALS adds additional code that is otherwise not needed. In our case, we have a middleware (PALSWare) that supports PALS systems. In this paper, we introduce a verification framework that shows how we can apply Software Model Checking (SMC) to verify a PALS system at the source code level. SMC is an automated and exhaustive source code checking technology. Compared to verifying (hardware or software) models, verifying the actual source code is more useful because it minimizes any chance of false interpretation and eliminates the possibility of missing software bugs that were absent in the model but introduced during implementation. In other words, SMC reduces the semantic gap between what is verified and what is executed. Our approach is compositional, i.e., the verification of PALSWare is done separately from applications. Since PALSWare is inherently concurrent, to verify it via SMC we must overcome the state-space explosion problem, which arises from concurrency and asynchrony. To this end, we develop novel simplification abstractions, prove their soundness, and then use these abstractions to reduce the verification of a system with many threads to verifying a system with a relatively small number of threads. When verifying an application, we leverage the (already verified) synchronicity guarantees provided by the PALSWare to reduce the verification complexity significantly. Thus, our approach uses both “abstraction” and “composition”, the two main techniques to reduce state-space explosion. This separation between verification of PALSWare and applications also provides better management against upgrades to either. We validate our approach by verifying the current PALSWare implementation, and several PALSWare-based distributed real-time applications.", "title": "" }, { "docid": "26709fe90e9780c61402da32f91de684", "text": "Recent years have seen remarkable progress in semantic segmentation. Yet, it remains a challenging task to apply segmentation techniques to video-based applications.
Specifically, the high throughput of video streams, the sheer cost of running fully convolutional networks, together with the low-latency requirements in many real-world applications, e.g. autonomous driving, present a significant challenge to the design of the video segmentation framework. To tackle this combined challenge, we develop a framework for video semantic segmentation, which incorporates two novel components: (1) a feature propagation module that adaptively fuses features over time via spatially variant convolution, thus reducing the cost of per-frame computation: and (2) an adaptive scheduler that dynamically allocate computation based on accuracy prediction. Both components work together to ensure low latency while maintaining high segmentation quality. On both Cityscapes and CamVid, the proposed framework obtained competitive performance compared to the state of the art, while substantially reducing the latency, from 360 ms to 119 ms.", "title": "" }, { "docid": "852b4c7b434937299a82c4b8aa3f264e", "text": "Baer's review (2003; this issue) suggests that mindfulness-based interventions are clinically efficacious, but that better designed studies are now needed to substantiate the field and place it on a firm foundation for future growth. Her review, coupled with other lines of evidence, suggests that interest in incorporating mindfulness into clinical interventions in medicine and psychology is growing. It is thus important that professionals coming to this field understand some of the unique factors associated with the delivery of mindfulness-based interventions and the potential conceptual and practical pitfalls of not recognizing the features of this broadly unfamiliar landscape. This commentary highlights and contextualizes (1) what exactly mindfulness is, (2) where it came from, (3) how it came to be introduced into medicine and health care, (4) issues of cross-cultural sensitivity and understanding in the study of meditative practices stemming from other cultures and in applications of them in novel settings, (5) why it is important for people who are teaching mind-fulness to practice themselves, (6) results from 3 recent Health Care, and Society not reviewed by Baer but which raise a number of key questions about clinical applicability , study design, and mechanism of action, and (7) current opportunities for professional training and development in mindfulness and its clinical applications. Iappreciate the opportunity to comment on Baer's (2003; this issue) review of mindfulness training as clinical intervention and to add my own reflections on the emergence of mindfulness in a clinical context, especially in a journal explicitly devoted to both science and practice. The universe of mindfulness 1 brings with it a whole new meaning and thrust to the word practice, one which I believe has the potential to contribute profoundly to the further development of the field of clinical psychology and its allied disciplines , behavioral medicine, psychosomatic medicine, and health psychology, through both a broadening of research approaches to mind/body interactions and the development of new classes of clinical interventions. I find the Baer review to be evenhanded, cogent, and perceptive in its description and evaluation of the work that has been published through the middle of 2001, work that features mindfulness training as the primary element in various clinical interventions. 
It complements nicely the recent review by Bishop (2002), which to my mind ignores some of the most important, if difficult to define, features of such interventions in its emphasis on the perceived need", "title": "" }, { "docid": "6bdfbe239ae9fb6ad1911ae3a066ded6", "text": "Virtual machines with advances in software and architectural support have emerged as the basis for enterprises to allocate resources recently. One main benefit of virtual machine is server consolidation. However, flexible and complex consolidation results in some unpredictable performance problems and introduces new requirements, such as proper configurations for scheduler and reasonable arrangements for services. In this paper, we present a comparative performance evaluation of several different typical application consolidations in different configurations of scheduler parameters (under Credit scheduler) in Xen. We analyze the impact of the configurations of scheduler and mutual impact between VMs which run different types of applications, present proposals for users to adopt an efficient scheduler configuration in applying virtualization, and offer insight into the relationship of performance and scheduler parameters to motivate future innovation in virtualization.", "title": "" }, { "docid": "113373d6a9936e192e5c3ad016146777", "text": "This paper examines published data to develop a model for detecting factors associated with false financia l statements (FFS). Most false financial statements in Greece can be identified on the basis of the quantity and content of the qualification s in the reports filed by the auditors on the accounts. A sample of a total of 76 firms includes 38 with FFS and 38 non-FFS. Ten financial variables are selected for examination as potential predictors of FFS. Univariate and multivariate statistica l techniques such as logistic regression are used to develop a model to identify factors associated with FFS. The model is accurate in classifying the total sample correctly with accuracy rates exceeding 84 per cent. The results therefore demonstrate that the models function effectively in detecting FFS and could be of assistance to auditors, both internal and external, to taxation and other state authorities and to the banking system. the empirical results and discussion obtained using univariate tests and multivariate logistic regression analysis. Finally, in the fifth section come the concluding remarks.", "title": "" }, { "docid": "d8d93b00f8b9352a528b3c59d77bb58d", "text": "BACKGROUND\nThe gut microbiome is increasingly recognized as a contributor to disease states. Patients with type 1 diabetes (DM1) have distinct gut microbiota in comparison to non-diabetic individuals, and it has been linked to changes in intestinal permeability, inflammation and insulin resistance. Prebiotics are non-digestible carbohydrates that alter gut microbiota and could potentially improve glycemic control in children with DM1. This pilot study aims to determine the feasibility of a 12-week dietary intervention with prebiotics in children with DM1.\n\n\nMETHODS/DESIGN\nThis pilot study is a single-centre, randomized, double-blind, placebo-controlled trial in children aged 8 to 17 years with DM1 for at least one year. Participants will be randomized to receive either placebo (maltodextrin 3.3 g orally/day) or prebiotics (oligofructose-enriched inulin 8 g orally/day; Synergy1, Beneo, Mannheim, Germany). 
Measures to be assessed at baseline, 3 months and 6 months include: anthropometric measures, insulin doses/regimens, frequency of diabetic ketoacidosis, frequency of severe hypoglycemia, average number of episodes of hypoglycemia per week, serum C-peptide, HbA1c, serum inflammatory markers (IL-6, IFN-gamma, TNF-alpha, and IL-10), GLP-1 and GLP-2, intestinal permeability using urine assessment after ingestion of lactulose, mannitol and 3-O-methylglucose, and stool sample collection for gut microbiota profiling.\n\n\nDISCUSSION\nThis is a novel pilot study designed to test feasibility for a fully powered study. We hypothesize that consumption of prebiotics will alter gut microbiota and intestinal permeability, leading to improved glycemic control. Prebiotics are a potentially novel, inexpensive, low-risk treatment addition for DM1 that may improve glycemic control by changes in gut microbiota, gut permeability and inflammation.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov: NCT02442544 . Registered on 10 March 2015.", "title": "" }, { "docid": "8c5a76124b7d37929cef1a7a67eae3ba", "text": "This paper describes the ongoing development of a highly configurable word processing environment developed using a pragmatic, obstacle-by-obstacle approach to alleviating some of the visual problems encountered by dyslexic computer users. The paper describes the current version of the software and the development methodology as well as the results of a pilot study which indicated that visual environment individually configured using the SeeWord software improved reading accuracy as well as subjectively rated reading comfort.", "title": "" }, { "docid": "f9b2713d5d668bf2d6fe7141126375e2", "text": "In the course of daily living, humans frequently encounter situations in which a motor activity, once initiated, becomes unnecessary or inappropriate. Under such circumstances, the ability to inhibit motor responses can be of vital importance. Although the nature of response inhibition has been studied in psychology for several decades, its neural basis remains unclear. Using transcranial magnetic stimulation, we found that temporary deactivation of the pars opercularis in the right inferior frontal gyrus selectively impairs the ability to stop an initiated action. Critically, deactivation of the same region did not affect the ability to execute responses, nor did it influence physiological arousal. These findings confirm and extend recent reports that the inferior frontal gyrus is vital for mediating response inhibition.", "title": "" }, { "docid": "ccd64b0be6fee634e928206867ab4116", "text": "CASE REPORT A 55 year old female was referred for investigation and possible surgery of a thyroid swelling. She had smoked 30 cigarettes per day for many years. Past medical history consisted of insertion of bilateral silicone breast implants 10 years previously. Clinical examination suggested a multinodular goitre and identified slight thickening superior to the left breast implant. Investigations revealed normal blood tests, and a multinodular goitre was confirmed on ultrasound scan. Routine chest X-ray (Fig. 1) identified an opacity in the left upper lobe showing features suggestive of a primary lung tumour. However, a lateral view failed to detect an abnormality in the thoracic cavity. CT scanning, performed with a view to percutaneous biopsy, revealed that the \"lung tumour\" was in fact related to the silicone implant (Fig. 2). 
Subsequent surgery confirmed rupture of the left breast prosthesis.", "title": "" }, { "docid": "d473619f76f81eced041df5bc012c246", "text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.", "title": "" }, { "docid": "116c2138e2517b51cb1a01e4edbd515b", "text": "This technical demo presents a novel emotion-based music retrieval platform, called Mr. Emo, for organizing and browsing music collections. Unlike conventional approaches which quantize emotions into classes, Mr. Emo defines emotions by two continuous variables arousal and valence and employs regression algorithms to predict them. Associated with arousal and valence values (AV values), each music sample becomes a point in the arousal-valence emotion plane, so a user can easily retrieve music samples of certain emotion(s) by specifying a point or a trajectory in the emotion plane. Being content centric and functionally powerful, such emotion-based retrieval complements traditional keyword- or artist-based retrieval. The demo shows the effectiveness and novelty of music retrieval in the emotion plane.", "title": "" }, { "docid": "b2e81e7730a835a875d0d78d84084c1b", "text": "User experience of smart mobile devices can be improved in numerous scenarios with the assist of in-air gesture recognition. Most existing methods proposed by industry and academia are based on special sensors. On the contrary, a special sensor-independent in-air gesture recognition method named Dolphin is proposed in this paper which can be applied to off-the-shelf smart devices directly. The only sensors Dolphin needs are the loudspeaker and microphone embedded in the device. Dolphin emits a continuous 21 KHz tone by the loudspeaker and receive the gesture-reflecting ultrasonic wave by the microphone. The gesture performed is encoded into the reflected ultrasonic in the form of Doppler shift. By combining manual recognition and machine leaning methods, Dolphin extracts features from Doppler shift and recognizes a rich set of pre-defined gestures with high accuracy in real time. Parameter selection strategy and gesture recognition under several scenarios are discussed and evaluated in detail. Dolphin can be adapted to multiple devices and users by training using machine learning methods.", "title": "" }, { "docid": "07e1659d504d773107b8b49ecc090496", "text": "0957-4174/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.eswa.2010.10.050 ⇑ Corresponding author. Tel.: +86 29 85307830; fax E-mail addresses: xiejuany@snnu.edu.cn (J. Xie) (C. Wang). 
In this paper, we developed a diagnosis model based on support vector machines (SVM) with a novel hybrid feature selection method to diagnose erythemato-squamous diseases. Our proposed hybrid feature selection method, named improved F-score and Sequential Forward Search (IFSFS), combines the advantages of filter and wrapper methods to select the optimal feature subset from the original feature set. In our IFSFS, we improved the original F-score from measuring the discrimination of two sets of real numbers to measuring the discrimination between more than two sets of real numbers. The improved Fscore and Sequential Forward Search (SFS) are combined to find the optimal feature subset in the process of feature selection, where, the improved F-score is an evaluation criterion of filter method, and SFS is an evaluation system of wrapper method. The best parameters of kernel function of SVM are found out by grid search technique. Experiments have been conducted on different training-test partitions of the erythemato-squamous diseases dataset taken from UCI (University of California Irvine) machine learning database. Our experimental results show that the proposed SVM-based model with IFSFS achieves 98.61% classification accuracy and contains 21 features. With these results, we conclude our method is very promising compared to the previously reported results. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "eb4b29fe1b388349c6b020a381fdce63", "text": "Hybrid AC/DC microgrids have been planned for the better interconnection of different distributed generation systems (DG) to the power grid, and exploiting the prominent features of both ac and dc microgrids. Connecting these microgrids requires an interlinking AC/DC converter (IC) with a proper power management and control strategy. During the islanding operation of the hybrid AC/DC microgrid, the IC is intended to take the role of supplier to one microgrid and at the same time acts as a load to the other microgrid and the power management system should be able to share the power demand between the existing AC and dc sources in both microgrids. This paper considers the power flow control and management issues amongst multiple sources dispersed throughout both ac and dc microgrids. The paper proposes a decentralized power sharing method in order to eliminate the need for any communication between DGs or microgrids. This hybrid microgrid architecture allows different ac or dc loads and sources to be flexibly located in order to decrease the required power conversions stages and hence the system cost and efficiency. The performance of the proposed power control strategy is validated for different operating conditions, using simulation studies in the PSCAD/EMTDC software environment.", "title": "" }, { "docid": "2b7a8590fe5e73d254a5be2ba3c1ee5b", "text": "High resolution magnetic resonance (MR) imaging is desirable in many clinical applications due to its contribution to more accurate subsequent analyses and early clinical diagnoses. Single image super resolution (SISR) is an effective and cost efficient alternative technique to improve the spatial resolution of MR images. In the past few years, SISR methods based on deep learning techniques, especially convolutional neural networks (CNNs), have achieved state-of-the-art performance on natural images. However, the information is gradually weakened and training becomes increasingly difficult as the network deepens. 
The problem is more serious for medical images because lacking high quality and effective training samples makes deep models prone to underfitting or overfitting. Nevertheless, many current models treat the hierarchical features on different channels equivalently, which is not helpful for the models to deal with the hierarchical features discriminatively and targetedly. To this end, we present a novel channel splitting network (CSN) to ease the representational burden of deep models. The proposed CSN model divides the hierarchical features into two branches, i.e., residual branch and dense branch, with different information transmissions. The residual branch is able to promote feature reuse, while the dense branch is beneficial to the exploration of new features. Besides, we also adopt the merge-and-run mapping to facilitate information integration between different branches. Extensive experiments on various MR images, including proton density (PD), T1 and T2 images, show that the proposed CSN model achieves superior performance over other state-of-the-art SISR methods.", "title": "" }, { "docid": "99f29ef2ec2e72b23e654ff0db23b2dc", "text": "We consider the problem of learning Boltzmann machine classifiers from relational data. Our goal is to extend the deep belief framework of RBMs to statistical relational models. This allows one to exploit the feature hierarchies and the non-linearity inherent in RBMs over the rich representations used in statistical relational learning (SRL). Specifically, we use lifted random walks to generate features for predicates that are then used to construct the observed features in the RBM in a manner similar to Markov Logic Networks. We show empirically that this method of constructing an RBM is comparable or better than the state-of-theart probabilistic relational learning algorithms on four relational domains.", "title": "" } ]
scidocsrr
f50cbaf1cb14ba172be53e99d9ef3a1a
Gender identities and gender dysphoria in the Netherlands.
[ { "docid": "55969912d37a5550953b954ba4efd7d3", "text": "Apart from some general issues related to the Gender Identity Disorder (GID) diagnosis, such as whether it should stay in the DSM-V or not, a number of problems specifically relate to the current criteria of the GID diagnosis for adolescents and adults. These problems concern the confusion caused by similarities and differences of the terms transsexualism and GID, the inability of the current criteria to capture the whole spectrum of gender variance phenomena, the potential risk of unnecessary physically invasive examinations to rule out intersex conditions (disorders of sex development), the necessity of the D criterion (distress and impairment), and the fact that the diagnosis still applies to those who already had hormonal and surgical treatment. If the diagnosis should not be deleted from the DSM, most of the criticism could be addressed in the DSM-V if the diagnosis would be renamed, the criteria would be adjusted in wording, and made more stringent. However, this would imply that the diagnosis would still be dichotomous and similar to earlier DSM versions. Another option is to follow a more dimensional approach, allowing for different degrees of gender dysphoria depending on the number of indicators. Considering the strong resistance against sexuality related specifiers, and the relative difficulty assessing sexual orientation in individuals pursuing hormonal and surgical interventions to change physical sex characteristics, it should be investigated whether other potentially relevant specifiers (e.g., onset age) are more appropriate.", "title": "" } ]
[ { "docid": "b120095067684a67fe3327d18860e760", "text": "We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.", "title": "" }, { "docid": "feca1bd8b881f3d550f0f0912913081f", "text": "There is an ever-increasing interest in the development of automatic medical diagnosis systems due to the advancement in computing technology and also to improve the service by medical community. The knowledge about health and disease is required for reliable and accurate medical diagnosis. Diabetic Retinopathy (DR) is one of the most common causes of blindness and it can be prevented if detected and treated early. DR has different signs and the most distinctive are microaneurysm and haemorrhage which are dark lesions and hard exudates and cotton wool spots which are bright lesions. Location and structure of blood vessels and optic disk play important role in accurate detection and classification of dark and bright lesions for early detection of DR. In this article, we propose a computer aided system for the early detection of DR. The article presents algorithms for retinal image preprocessing, blood vessel enhancement and segmentation and optic disk localization and detection which eventually lead to detection of different DR lesions using proposed hybrid fuzzy classifier. The developed methods are tested on four different publicly available databases. The presented methods are compared with recently published methods and the results show that presented methods outperform all others.", "title": "" }, { "docid": "8360cf2cda48bc34911f2f5c225b66bf", "text": "We study the cold-start link prediction problem where edges between vertices is unavailable by learning vertex-based similarity metrics. Existing metric learning methods for link prediction fail to consider communities which can be observed in many real-world social networks. Because di↵erent communities usually exhibit di↵erent intra-community homogeneities, learning a global similarity metric is not appropriate. In this paper, we thus propose to learn communityspecific similarity metrics via joint community detection. Experiments on three real-world networks show that the intra-community homogeneities can be well preserved, and the mixed community-specific metrics perform better than a global similarity metric in terms of prediction accuracy.", "title": "" }, { "docid": "13e2b22875e1a23e9e8ea2f80671c74e", "text": "This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. 
Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.", "title": "" }, { "docid": "cc6111093376f0bae267fe686ecd22cd", "text": "This paper overviews the diverse information technologies that are used to provide athletes with relevant feedback. Examples taken from various sports are used to illustrate selected applications of technology-based feedback. Several feedback systems are discussed, including vision, audition and proprioception. Each technology described here is based on the assumption that feedback would eventually enhance skill acquisition and sport performance and, as such, its usefulness to athletes and coaches in training is critically evaluated.", "title": "" }, { "docid": "f73c88a8a6d0bd1790e8c8a5b73619a6", "text": "This critical review examines the evidence evaluating the efficacy of non-speech oral motor exercises (NSOMEs) as a treatment approach for children with phonological/articulation disorders. Research studies include one randomized clinical trial design, one single group pre-test post-test design and one single subject design. Overall, the evidence does not support the use of NSOMEs to treat children with phonological/articulation disorders. Future and clinical recommendations are discussed.", "title": "" }, { "docid": "93e5c9fcb14c4b409e196079be40db9c", "text": "Creativity cannot exist in a vacuum; it develops through feedback, learning, reflection and social interaction with others. However, this perspective has been relatively under-investigated in computational creativity research, which typically examines systems that operate individually. We develop a thought experiment showing how structured dialogues can help develop the creative aspects of computer poetry. Centrally in this approach, we ask questions of a poem, inviting it to tell us in what way it may be considered a “creative making.”", "title": "" }, { "docid": "22727f9a6951582de1e98b522b40f68e", "text": "High-speed electric machines are becoming increasingly important and utilized in many applications. This paper addresses the considerations and challenges of the rotor design of high-speed surface permanent magnet machines. The paper focuses particularly on mechanical aspects of the design. Special attention is given to the rotor sleeve design including thickness and material. Permanent magnet design parameters are discussed. Surface permanent magnet rotor dynamic considerations and challenges are also discussed.", "title": "" }, { "docid": "532463ff1e5e91a2f9054cb86dcfa654", "text": "During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to well established finite-di↵erence time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time-domain. The method is now actively studied for various application contexts including those requiring to model light/matter interactions on the nanoscale. 
In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions by considering more particularly the possibility of combining the use of a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.", "title": "" }, { "docid": "87c1d39dd39375f40306416077f3cb22", "text": "For any AND-OR formula of size N, there exists a bounded-error N1/2+o(1)-time quantum algorithm, based on a discrete-time quantum walk, that evaluates this formula on a black-box input. Balanced, or \"approximately balanced,\" formulas can be evaluated in O(radicN) queries, which is optimal. It follows that the (2-o(1))th power of the quantum query complexity is a lower bound on the formula size, almost solving in the positive an open problem posed by Laplante, Lee and Szegedy.", "title": "" }, { "docid": "4dd84cdcb1bae5dcbaebafb5b234551e", "text": "In recent years, LPWAN technology, designed to realize low-power and long distance communication, has attracted much attention. Among several LPWAN technologies, Long Range (LoRa) is one of the most competitive physical layer protocol. Long Range Wide Area Network (LoRaWAN) is an upper layer protocol used with LoRa, and it provides several security functions including random number-based replay attack prevention system. According to recent studies, current replay attack prevention of LoRaWAN can mislead benign messages into replay attacks. To resolve this problem, several new replay attack prevention schemes have been proposed. However, existing schemes have limitations such as not being compatible with the existing packet structure or not considering an exceptional situation such as device reset. Therefore, in this paper, we propose a new LoRaWAN replay attack prevention scheme that resolves these problems. Our scheme follows the existing packet structure and is designed to cope with exceptional situations such as device reset. As a result of calculations, in our scheme, the probability that a normal message is mistaken for a replay attack is 60-89% lower than the current LoRaWAN. Real-world experiments also support these results.", "title": "" }, { "docid": "ccc70871f57f25da6141a7083bdf5174", "text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. 
2 The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is 3 that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. 
In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law 4 countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. In the United States, U.K., 5 Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. 
Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders 6 so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap", "title": "" }, { "docid": "d0e5ddcc0aa85ba6a3a18796c335dcd2", "text": "A novel planar end-fire circularly polarized (CP) complementary Yagi array antenna is proposed. The antenna has a compact and complementary structure, and exhibits excellent properties (low profile, single feed, broadband, high gain, and CP radiation). It is based on a compact combination of a pair of complementary Yagi arrays with a common driven element. In the complementary structure, the vertical polarization is contributed by a microstrip patch Yagi array, while the horizontal polarization is yielded by a strip dipole Yagi array. With the combination of the two orthogonally polarized Yagi arrays, a CP antenna with high gain and wide bandwidth is obtained. With a profile of <inline-formula> <tex-math notation=\"LaTeX\">$0.05\\lambda _{\\mathrm{0}}$ </tex-math></inline-formula> (3 mm), the antenna has a gain of about 8 dBic, an impedance bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\vert S_{11}\\vert < -10 $ </tex-math></inline-formula> dB) of 13.09% (4.57–5.21 GHz) and a 3-dB axial-ratio bandwidth of 10.51% (4.69–5.21 GHz).", "title": "" }, { "docid": "5aebbb08b705d98dbde9d3efe4affdf8", "text": "The benefit of localized features within the regular domain has given rise to the use of Convolutional Neural Networks (CNNs) in machine learning, with great proficiency in the image classification. The use of CNNs becomes problematic within the irregular spatial domain due to design and convolution of a kernel filter being non-trivial. 
One solution to this problem is to utilize graph signal processing techniques and the convolution theorem to perform convolutions on the graph of the irregular domain to obtain feature map responses to learnt filters. We propose graph convolution and pooling operators analogous to those in the regular domain. We also provide gradient calculations on the input data and spectral filters, which allow for the deep learning of an irregular spatial domain problem. Signal filters take the form of spectral multipliers, applying convolution in the graph spectral domain. Applying smooth multipliers results in localized convolutions in the spatial domain, with smoother multipliers providing sharper feature maps. Algebraic Multigrid is presented as a graph pooling method, reducing the resolution of the graph through agglomeration of nodes between layers of the network. Evaluation of performance on the MNIST digit classification problem in both the regular and irregular domain is presented, with comparison drawn to standard CNN. The proposed graph CNN provides a deep learning method for the irregular domains present in the machine learning community, obtaining 94.23% on the regular grid, and 94.96% on a spatially irregular subsampled MNIST.", "title": "" }, { "docid": "6e7a8f04a24d746b36cc5e9ac6e622f0", "text": "With the widespread of social media websites in the internet, and the huge number of users participating and generating infinite number of contents in these websites, the need for personalisation increases dramatically to become a necessity. One of the major issues in personalisation is building users’ profiles, which depend on many elements; such as the used data, the application domain they aim to serve, the representation method and the construction methodology. Recently, this area of research has been a focus for many researchers, and hence, the proposed methods are increasing very quickly. This survey aims to discuss the available user modelling techniques for social media websites, and to highlight the weakness and strength of these methods and to provide a vision for future work in user modelling in social media websites.", "title": "" }, { "docid": "cc9f566eb8ef891d76c1c4eee7e22d47", "text": "In this study, a hybrid artificial intelligent (AI) system integrating neural network and expert system is proposed to support foreign exchange (forex) trading decisions. In this system, a neural network is used to predict the forex price in terms of quantitative data, while an expert system is used to handle qualitative factor and to provide forex trading decision suggestions for traders incorporating experts' knowledge and the neural network's results. The effectiveness of the proposed hybrid AI system is illustrated by simulation experiments", "title": "" }, { "docid": "4283e30dd71ca479d09def1115ba6410", "text": "This paper presents the design and implementation of a bounding controller for the MIT Cheetah 2 and its experimental results. The paper introduces the architecture of the controller along with the functional roles of the subcomponents. The application of impulse scaling provides feedforward force profiles that automatically adapt across a wide range of speeds. A discrete gait pattern stabilizer maintains the footfall sequence and timing. Continuous feedback is layered to manage balance during the stance phase. Stable hybrid limit cycles are exhibited in simulation using simplified models, and are further validated in untethered 3D bounding experiments. 
Experiments are conducted both indoors and outdoors on various man-made and natural terrains. The control framework is shown to provide stable bounding in the hardware, at speeds of up to 6.4 m/s and with a minimum total cost of transport of 0.47. These results are unprecedented accomplishments in terms of efficiency and speed in untethered experimental quadruped machines.", "title": "" }, { "docid": "a1306f761e45fdd56ae91d1b48909d74", "text": "We propose a graphical model for representing networks of stochastic processes, the minimal generative model graph. It is based on reduced factorizations of the joint distribution over time. We show that under appropriate conditions, it is unique and consistent with another type of graphical model, the directed information graph, which is based on a generalization of Granger causality. We demonstrate how directed information quantifies Granger causality in a particular sequential prediction setting. We also develop efficient methods to estimate the topological structure from data that obviate estimating the joint statistics. One algorithm assumes upper bounds on the degrees and uses the minimal dimension statistics necessary. In the event that the upper bounds are not valid, the resulting graph is nonetheless an optimal approximation in terms of Kullback-Leibler (KL) divergence. Another algorithm uses near-minimal dimension statistics when no bounds are known, but the distribution satisfies a certain criterion. Analogous to how structure learning algorithms for undirected graphical models use mutual information estimates, these algorithms use directed information estimates. We characterize the sample-complexity of two plug-in directed information estimators and obtain confidence intervals. For the setting when point estimates are unreliable, we propose an algorithm that uses confidence intervals to identify the best approximation that is robust to estimation error. Last, we demonstrate the effectiveness of the proposed algorithms through the analysis of both synthetic data and real data from the Twitter network. In the latter case, we identify which news sources influence users in the network by merely analyzing tweet times.", "title": "" }, { "docid": "489127100b00493d81dc7644648732ad", "text": "This paper presents a software tool - called Fractal Nature - that provides a set of fractal and physical based methods for creating realistic terrains called Fractal Nature. The output of the program can be used for creating content for video games and serious games. The approach for generating the terrain is based on noise filters, such as Gaussian distribution, capable of rendering highly realistic environments. It is demonstrated how a random terrain can change its shape and visual appearance containing artefacts such as steep slopes and smooth riverbeds. Moreover, two interactive erosion systems, hydraulic and thermal, were implemented. An initial evaluation with 12 expert users provided useful feedback for the applicability of the algorithms in video games as well as insights for future improvements.", "title": "" }, { "docid": "91f4d79da3cac7a8e5aa4ea211032c12", "text": "Will CRISPR usher in a new era of Promethean overreach? CRISPR makes gene editing widely available and cheap. Anti-play-god bioethicists fear that geneticists will play god and precipitate a backlash from nature that could be devastating. 
In contrast to the anti-play-god bioethicists, this article recommends that laboratory science invoke the Precautionary Principle: pause at the yellow caution light, but then with constant risk-assessment proceed ahead.", "title": "" } ]
scidocsrr
4682d6ef8bbb25c37889324cf3df8a71
Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter
[ { "docid": "79ece5e02742de09b01908668383e8f2", "text": "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data.", "title": "" }, { "docid": "e6cae5bec5bb4b82794caca85d3412a2", "text": "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-ofthe-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.", "title": "" }, { "docid": "726d0b31638e945b2620eca6824b84dd", "text": "Profanity detection is often thought to be an easy task. However, past work has shown that current, list-based systems are performing poorly. They fail to adapt to evolving profane slang, identify profane terms that have been disguised or only partially censored (e.g., @ss, f$#%) or intentionally or unintentionally misspelled (e.g., biatch, shiiiit). For these reasons, they are easy to circumvent and have very poor recall. Secondly, they are a one-size fits all solution – making assumptions that the definition, use and perceptions of profane or inappropriate holds across all contexts. In this article, we present work that attempts to move beyond list-based profanity detection systems by identifying the context in which profanity occurs. The proposed system uses a set of comments from a social news site labeled by Amazon Mechanical Turk workers for the presence of profanity. This system far surpasses the performance of listbased profanity detection techniques. The use of crowdsourcing in this task suggests an opportunity to build profanity detection systems tailored to sites and communities.", "title": "" } ]
[ { "docid": "587253c0196c15c918178b42e25f3180", "text": "Deep Learning methods are currently the state-of-the-art in many Computer Vision and Image Processing problems, in particular image classification. After years of intensive investigation, a few models matured and became important tools, including Convolutional Neural Networks (CNNs), Siamese and Triplet Networks, Auto-Encoders (AEs) and Generative Adversarial Networks (GANs). The field is fast-paced and there is a lot of terminologies to catch up for those who want to adventure in Deep Learning waters. This paper has the objective to introduce the most fundamental concepts of Deep Learning for Computer Vision in particular CNNs, AEs and GANs, including architectures, inner workings and optimization. We offer an updated description of the theoretical and practical knowledge of working with those models. After that, we describe Siamese and Triplet Networks, not often covered in tutorial papers, as well as review the literature on recent and exciting topics such as visual stylization, pixel-wise prediction and video processing. Finally, we discuss the limitations of Deep Learning for Computer Vision.", "title": "" }, { "docid": "135ceae69b9953cf8fe989dcf8d3d0da", "text": "Recent advances in development of Wireless Communication in Vehicular Adhoc Network (VANET) has provided emerging platform for industrialists and researchers. Vehicular adhoc networks are multihop networks with no fixed infrastructure. It comprises of moving vehicles communicating with each other. One of the main challenge in VANET is to route the data efficiently from source to destination. Designing an efficient routing protocol for VANET is tedious task. Also because of wireless medium it is vulnerable to several attacks. Since attacks mislead the network operations, security is mandatory for successful deployment of such technology. This survey paper gives brief overview of different routing protocols. Also attempt has been made to identify major security issues and challenges associated with different routing protocols. .", "title": "" }, { "docid": "47790125ba78325a4455fcdbae96058a", "text": "Today solar energy became an important resource of energy generation. But the efficiency of solar system is very low. To increase its efficiency MPPT techniques are used. The main disadvantage of solar system is its variable voltage. And to obtained a stable voltage from solar panels DC-DC converters are used . DC-DC converters are of mainly three types buck, boost and cuk. This paper presents use of cuk converter with MPPT technique. Generally buck and boost converters used. But by using cuk converter we can step up or step down the voltage level according to the load requirement. The circuit has been simulated by MATLAB and Simulink softwares.", "title": "" }, { "docid": "6880d52a659f71199cb913532e1bd858", "text": "Human skin is a remarkable organ. It consists of an integrated, stretchable network of sensors that relay information about tactile and thermal stimuli to the brain, allowing us to maneuver within our environment safely and effectively. Interest in large-area networks of electronic devices inspired by human skin is motivated by the promise of creating autonomous intelligent robots and biomimetic prosthetics, among other applications. 
The development of electronic networks comprised of flexible, stretchable, and robust devices that are compatible with large-area implementation and integrated with multiple functionalities is a testament to the progress in developing an electronic skin (e-skin) akin to human skin. E-skins are already capable of providing augmented performance over their organic counterpart, both in superior spatial resolution and thermal sensitivity. They could be further improved through the incorporation of additional functionalities (e.g., chemical and biological sensing) and desired properties (e.g., biodegradability and self-powering). Continued rapid progress in this area is promising for the development of a fully integrated e-skin in the near future.", "title": "" }, { "docid": "054b3f9068c92545e9c2c39e0728ad17", "text": "Data Aggregation is an important topic and a suitable technique in reducing the energy consumption of sensors nodes in wireless sensor networks (WSN’s) for affording secure and efficient big data aggregation. The wireless sensor networks have been broadly applied, such as target tracking and environment remote monitoring. However, data can be easily compromised by a vast of attacks, such as data interception and tampering of data. Data integrity protection is proposed, gives an identity-based aggregate signature scheme for wireless sensor networks with a designated verifier. The aggregate signature scheme keeps data integrity, can reduce bandwidth and storage cost. Furthermore, the security of the scheme is effectively presented based on the computation of Diffie-Hellman random oracle model.", "title": "" }, { "docid": "2021f6474af6233c2a919b96dc4758e4", "text": "We introduce a new approach for finding overlapping clusters given pairwise similarities of objects. In particular, we relax the problem of correlation clustering by allowing an object to be assigned to more than one cluster. At the core of our approach is an optimization problem in which each data point is mapped to a small set of labels, representing membership in different clusters. The objective is to find a mapping so that the given similarities between objects agree as much as possible with similarities taken over their label sets. The number of labels can vary across objects. To define a similarity between label sets, we consider two measures: (i) a 0–1 function indicating whether the two label sets have non-zero intersection and (ii) the Jaccard coefficient between the two label sets. The algorithm we propose is an iterative local-search method. The definitions of label set similarity give rise to two non-trivial optimization problems, which, for the measures of set-intersection and Jaccard, we solve using a greedy strategy and non-negative least squares, respectively. We also develop a distributed version of our algorithm based on the BSP model and implement it using a Pregel framework. Our algorithm uses as input pairwise similarities of objects and can thus be applied when clustering structured objects for which feature vectors are not available. As a proof of concept, we apply our algorithms on three different and complex application domains: trajectories, amino-acid sequences, and textual documents.", "title": "" }, { "docid": "97f6e18ea96e73559a05444d666f306f", "text": "The increasingly ubiquitous availability of digital and networked tools has the potential to fundamentally transform the teaching and learning process. 
Research on the instructional uses of technology, however, has revealed that teachers often lack the knowledge to successfully integrate technology in their teaching and their attempts tend to be limited in scope, variety, and depth. Thus, technology is used more as “efficiency aids and extension devices” (McCormick & Scrimshaw, 2001, p. 31) rather than as tools that can “transform the nature of a subject at the most fundamental level” (p. 47). One way in which researchers have tried to better understand how teachers may better use technology in their classrooms has focused on the kinds of knowledge that teachers require. In this chapter, we introduce a framework, called technological pedagogical content knowledge (or TPACK for short), that describes the kinds of knowledge needed by a teacher for effective technology integration. The TPACK framework emphasizes how the connections among teachers’ understanding of content, pedagogy, and technology interact with one another to produce effective teaching. Even as a relatively new framework, the TPACK framework has significantly influenced theory, research, and practice in teacher education and teacher professional development. In this chapter, we describe the theoretical underpinnings of the framework, and explain the relationship between TPACK and related constructs in the educational technology literature. We outline the various approaches teacher educators have used to develop TPACK in pre- and in-service teachers, and the theoretical and practical issues that these professional development efforts have illuminated. We then review the widely varying approaches to measuring TPACK, with an emphasis on the interaction between form and function of the assessment, and resulting reliability and validity outcomes for the various approaches. We conclude with a summary of the key theoretical, pedagogical, and methodological issues related to TPACK, and suggest future directions for researchers, practitioners, and teacher educators.", "title": "" }, { "docid": "d191ba553ce49c291b86fca8abdb3022", "text": "Variations in writing styles are commonly used to adapt the content to a specific context, audience, or purpose. However, applying stylistic variations is still by and large a manual process, and there have been few efforts towards automating it. In this paper we explore automated methods to transform text from modern English to Shakespearean English using an end-to-end trainable neural model with pointers to enable copy action. To tackle the limited amount of parallel data, we pre-train embeddings of words by leveraging external dictionaries mapping Shakespearean words to modern English words as well as additional text. Our methods are able to get a BLEU score of 31+, an improvement of ≈ 6 points above the strongest baseline.", "title": "" }, { "docid": "76723b8f1c977270aaafab16f95384ea", "text": "The Arimoto-Blahut (1972) algorithm is generalized for computation of the total capacity of discrete memoryless multiple-access channels (MAC). In addition, a class of MAC is defined with the property that the uniform distribution achieves the total capacity. These results are based on the specialization of the Kuhn-Tucker condition for the total capacity of the MAC, and an extension of a known symmetry property for single-user channels.", "title": "" }, { "docid": "dc6ef4268b98d212392e79441f64c98a", "text": "This paper investigates the framework of encoder-decoder with attention for sequence labelling based spoken language understanding.
We introduce Bidirectional Long Short Term Memory - Long Short Term Memory networks (BLSTM-LSTM) as the encoder-decoder model to fully utilize the power of deep learning. In the sequence labelling task, the input and output sequences are aligned word by word, while the attention mechanism cannot provide the exact alignment. To address this limitation, we propose a novel focus mechanism for encoder-decoder framework. Experiments on the standard ATIS dataset showed that BLSTM-LSTM with focus mechanism defined the new state-of-the-art by outperforming standard BLSTM and attention based encoder-decoder. Further experiments also show that the proposed model is more robust to speech recognition errors.", "title": "" }, { "docid": "39fcc45d79680c7e231643d6c75aee18", "text": "This paper presents a Kernel Entity Salience Model (KESM) that improves text understanding and retrieval by better estimating entity salience (importance) in documents. KESM represents entities by knowledge enriched distributed representations, models the interactions between entities and words by kernels, and combines the kernel scores to estimate entity salience. The whole model is learned end-to-end using entity salience labels. The salience model also improves ad hoc search accuracy, providing effective ranking features by modeling the salience of query entities in candidate documents. Our experiments on two entity salience corpora and two TREC ad hoc search datasets demonstrate the effectiveness of KESM over frequency-based and feature-based methods. We also provide examples showing how KESM conveys its text understanding ability learned from entity salience to search.", "title": "" }, { "docid": "1e464e122d0fe178244fc9af3fa8be25", "text": "Research on sentiment analysis in English language has undergone major developments in recent years. Chinese sentiment analysis research, however, has not evolved significantly despite the exponential growth of Chinese e-business and e-markets. This review paper aims to study past, present, and future of Chinese sentiment analysis from both monolingual and multilingual perspectives. The constructions of sentiment corpora and lexica are first introduced and summarized. Following, a survey of monolingual sentiment classification in Chinese via three different classification frameworks is conducted. Finally, sentiment classification based on the multilingual approach is introduced. After an overview of the literature, we propose that a more human-like (cognitive) representation of Chinese concepts and their inter-connections could overcome the scarceness of available resources and, hence, improve the state of the art. With the increasing expansion of Chinese language on the Web, sentiment analysis in Chinese is becoming an increasingly important research field. Concept-level sentiment analysis, in particular, is an exciting yet challenging direction for such research field which holds great promise for the future.", "title": "" }, { "docid": "14c3d8cee12007dc8af75c7e0df77f00", "text": "A modular magic sudoku solution is a sudoku solution with symbols in {0, 1, ..., 8} such that rows, columns, and diagonals of each subsquare add to zero modulo nine. We count these sudoku solutions by using the action of a suitable symmetry group and we also describe maximal mutually orthogonal families.", "title": "" }, { "docid": "86177ff4fbc089fde87d1acd8452d322", "text": "Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. 
In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life.", "title": "" }, { "docid": "6ae2d8b60e9182300a2392a91e8ca876", "text": "The need for text summarization is crucial as we enter the era of information overload. In this paper we present an automatic summarization system, which generates a summary for a given input document. Our system is based on identification and extraction of important sentences in the input document. We listed a set of features that we collect as part of summary generation process. These features were stored using vector representation model. We defined a ranking function which ranks each sentence as a linear combination of the sentence features. We also discussed about discourse coherence in summaries and techniques to achieve coherent and readable summaries. The experiments showed that the summary generated is coherent the selected features are really helpful in extracting the important information in the document.", "title": "" }, { "docid": "62001361e62e2496204502b491825850", "text": "The anaerobic threshold (AT) has been defined as the theoretical highest exercise level that can be maintained for prolonged periods. It is of practical importance to the competitive endurance athlete to measure progress and plan training programs. The primary objective of this study was to assess the reliability and validity of breakpoint in the respiratory rate (RR) during incremental exercise as a marker for the AT. Secondary objectives were 1) to assess the reliability of the ventilatory threshold (VE) and ventilatory equivalent (VE/VO2) breakpoint, and 2) to assess differences in these 3 methods for their potential to measure change in fitness, as measured by standard error of measurement (SEM), coefficient of variability (CV), and correlation coefficient (R). Fifteen competitive male cyclists (5 category II, 6 category III, 1 category IV, 3 category V United States Cycling Federation) completed 2 maximal oxygen consumption tests within one week on an electronically braked cycle ergometer. A repeated measures Analysis of Variance using 2x3 design (test and methods) resulted in no significant differences (F = 0.02, p = 0.978), indicating that 1)all 3 methods are reproducible, and 2) RR, when compared to VE and VE/VO2, is a valid method of assessing the anaerobic threshold. The lowest SEM, lowest CV and highest R were obtained with the VE method (SEM = 19.4 watts, CV = 6.7%, R = 0.872), compared to VE/VO2 (SEM = 21.5 watts, CV = 7.4%, R=.811) and RR (SEM = 35.3 watts, CV = 12.2%, R = 0.800). From the results of this study, it is concluded that the RR method is a valid and reliable method for detecting AT. However, due to the relatively high SEM and CV, and low R, when compared to VE and VE/VO2, its insensitivity to small changes seen in highly fit athletes would preclude its use in measuring changes in AT. It appears that either VE or VE/VO2 would be appropriate for measuring AT changes in highly fit athletes. 
Key Points: Respiratory rate is a valid and reliable marker of the anaerobic threshold. Due to a relatively high standard error of measurement and coefficient of variability for the respiratory rate method, use of ventilation (VE) and the ventilatory equivalent for oxygen (VE/VO2) is preferred when assessing changes in the anaerobic threshold. When assessing changes in maximal aerobic capacity, maximal watts has a lower standard error of measurement and coefficient of variability and is preferred over changes in maximal oxygen consumption.", "title": "" }, { "docid": "3342e2f79a6bb555797224ac4738e768", "text": "Regions in three-dimensional magnetic resonance (MR) brain images can be classified using protocols for manually segmenting and labeling structures. For large cohorts, time and expertise requirements make this approach impractical. To achieve automation, an individual segmentation can be propagated to another individual using an anatomical correspondence estimate relating the atlas image to the target image. The accuracy of the resulting target labeling has been limited but can potentially be improved by combining multiple segmentations using decision fusion. We studied segmentation propagation and decision fusion on 30 normal brain MR images, which had been manually segmented into 67 structures. Correspondence estimates were established by nonrigid registration using free-form deformations. Both direct label propagation and an indirect approach were tested. Individual propagations showed an average similarity index (SI) of 0.754+/-0.016 against manual segmentations. Decision fusion using 29 input segmentations increased SI to 0.836+/-0.009. For indirect propagation of a single source via 27 intermediate images, SI was 0.779+/-0.013. We also studied the effect of the decision fusion procedure using a numerical simulation with synthetic input data. The results helped to formulate a model that predicts the quality improvement of fused brain segmentations based on the number of individual propagated segmentations combined. We demonstrate a practicable procedure that exceeds the accuracy of previous automatic methods and can compete with manual delineations.", "title": "" }, { "docid": "4902543ee50d7a16a6adedf1aa0981b0", "text": "Deep Neural Networks have become a state-of-the-art approach in perception processing tasks such as speech recognition, image processing and natural language processing. Many state-of-the-art benchmarks for these algorithms are using deep learning techniques. The deep neural networks in today's applications need to process very large amounts of data. Different approaches have been proposed for scaling these algorithms. A few approaches look to provide a solution over existing big data processing platforms, which usually run over a large-scale commodity CPU cluster. As training deep learning workloads requires many small computations and large communication to pass the data between layers, General Purpose GPUs seem to be the best platforms to train these networks. Different approaches have been proposed to scale processing on clusters of GPU servers. We have summarized various approaches used in this regard.", "title": "" }, { "docid": "79827b8ad761a1a14ea7370cf89579d8", "text": "Once a rarely used subset of medical treatments, protein therapeutics have increased dramatically in number and frequency of use since the introduction of the first recombinant protein therapeutic — human insulin — 25 years ago.
Protein therapeutics already have a significant role in almost every field of medicine, but this role is still only in its infancy. This article overviews some of the key characteristics of protein therapeutics, summarizes the more than 130 protein therapeutics used currently and suggests a new classification of these proteins according to their pharmacological action.", "title": "" } ]
scidocsrr
67de148933627c8e81fb3f64fa1c4ebd
TRANSFORMER-XL: LANGUAGE MODELING
[ { "docid": "87a4e88a41ede7edfac027f898a39651", "text": "We introduce a general and simple structural design called “Multiplicative Integration” (MI) to improve recurrent neural networks (RNNs). MI changes the way in which information from difference sources flows and is integrated in the computational building block of an RNN, while introducing almost no extra parameters. The new structure can be easily embedded into many popular RNN models, including LSTMs and GRUs. We empirically analyze its learning behaviour and conduct evaluations on several tasks using different RNN models. Our experimental results demonstrate that Multiplicative Integration can provide a substantial performance boost over many of the existing RNN models.", "title": "" }, { "docid": "39568ad13dd4ed58180b42e323996574", "text": "Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when the long-term memory is required. This paper introduces a simple, yet powerful modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of RNN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving two tasks: audio signal generation and TIMIT spoken word classification, where it outperforms both RNN and LSTM networks.", "title": "" }, { "docid": "5a0da0bad12a1f0e9a5a2a272519c49e", "text": "Recurrent neural networks have been very successful at pred icting sequences of words in tasks such as language modeling. However, all such m odels are based on the conventional classification framework, where model is t rained against one-hot targets, and each word is represented both as an input and as a output in isolation. This causes inefficiencies in learning both in terms of utili zing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learn ing in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the numbe r of trainable variables. Our LSTM model lowers the state of the art word-level perplex ity on the Penn Treebank to 68.5.", "title": "" } ]
[ { "docid": "2827b6387ef2cc6e668b69f44364f00b", "text": "BITCOIN is a novel decentralized cryptocurrency system which has recently received a great attention from a wider audience. An interesting and unique feature of this system is that the complete list of all the transactions occurred from its inception is publicly available. This enables the investigation of funds movements to uncover interesting properties of the BITCOIN economy. In this paper we present a set of analyses of the user graph, i.e. the graph obtained by an heuristic clustering of the graph of BITCOIN transactions. Our analyses consider an up-to-date BITCOIN blockchain, as in December 2015, after the exponential explosion of the number of transactions occurred in the last two years. The set of analyses we defined includes, among others, the analysis of the time evolution of BITCOIN network, the verification of the \"rich get richer\" conjecture and the detection of the nodes which are critical for the network connectivity.", "title": "" }, { "docid": "068386a089895bed3a7aebf2d1a7b35d", "text": "The purpose of this prospective study was to assess the efficacy of the Gertzbein classification and the Load Shearing classification in the conservative treatment of thoracolumbar burst spinal fractures. From 1997 to 1999, 30 consecutive patients with single-level thoracolumbar spinal injury with no neurological impairment were classified according to the Gertzbein classification and the Load Shearing scoring, and were treated conservatively. A custom-made thoracolumbosacral orthosis was worn in all patients for 6 months and several radiologic parameters were evaluated, while the Denis Pain and Work Scale were used to assess the clinical outcome. The average follow-up period was 24 months (range 12–39 months). During this period radiograms showed no improvement of any radiologic parameter. However, the clinical outcome was satisfactory in 28 of 30 patients with neither pseudarthrosis, nor any complications recorded on completion of treatment. This study showed that thoracolumbar burst fractures Gertzbein A3 with a load shearing score 6 or less can be successfully treated conservatively. Patient selection is a fundamental component in clinical success for these classification systems. Cette étude a pour objectif de classer les fractures comminutives du segment thoraco-lombaire de la colonne vertébrale qui ont été traitées de manière conservatrice, conformément à la classification de Gertzbein et à la classification de la répartition des contraintes. Depuis 1997 à 1999, trente malades présentant une fracture comminutive dans le segment thoraco-lombaire de la colonne vertébrale, sans dommages neurologiques, ont été traités de manière conservatoire, conformément aux classifications de Gertzbein et à la notation de la répartition des charges. Les patients ont porté une orthèse thoraco-lombaire pendant 6 mois et on a procédé à une évaluation des paramètres radiographiques. L'échelle de la douleur et du travail de Dennis a été utilisée pour évaluer les résultats. La durée moyenne d'observation des malades a été de 24 mois (de 12 à 39 mois). Bien que les paramètres radiologiques, pendant cette période, n'aient manifesté aucune amélioration, le résultat clinique de ces patients a été satisfaisant pour 93.33% d' entre eux. L'on n'a pas constaté de complications ni de pseudarthroses. La classification de Gertzbein associe le type de fracture au degré d'instabilité mécanique et au dommage neurologique. 
La classification de la répartition des contraintes relie l'écrasement et le déplacement de la fracture à la stabilité mécanique. Les fractures explosives du segment lombaire de la colonne vertébrale de type A3, selon Gertzbein, degré 6 ou inférieur à 6, selon la classification des contraintes, peuvent être traitées avec succès de manière conservatrice. Le choix judicieux des patients est important pour le succès clinique de cette méthode de classification.", "title": "" }, { "docid": "cd48180e93d25858410222fff4b1f43e", "text": "Metaphors pervade discussions of social issues like climate change, the economy, and crime. We ask how natural language metaphors shape the way people reason about such social issues. In previous work, we showed that describing crime metaphorically as a beast or a virus, led people to generate different solutions to a city's crime problem. In the current series of studies, instead of asking people to generate a solution on their own, we provided them with a selection of possible solutions and asked them to choose the best ones. We found that metaphors influenced people's reasoning even when they had a set of options available to compare and select among. These findings suggest that metaphors can influence not just what solution comes to mind first, but also which solution people think is best, even when given the opportunity to explicitly compare alternatives. Further, we tested whether participants were aware of the metaphor. We found that very few participants thought the metaphor played an important part in their decision. Further, participants who had no explicit memory of the metaphor were just as much affected by the metaphor as participants who were able to remember the metaphorical frame. These findings suggest that metaphors can act covertly in reasoning. Finally, we examined the role of political affiliation on reasoning about crime. The results confirm our previous findings that Republicans are more likely to generate enforcement and punishment solutions for dealing with crime, and are less swayed by metaphor than are Democrats or Independents.", "title": "" }, { "docid": "7a6873110b5976db2ec0936b9e5c6001", "text": "This paper addresses the problem of turn on performances of an insulated gate bipolar transistor (IGBT) that works in hard switching conditions. The IGBT turn on dynamics with an inductive load is described, and corresponding IGBT turn on losses and reverse recovery current of the associated freewheeling diode are analysed. A new IGBT gate driver based on feed-forward control of the gate emitter voltage is presented in the paper. In contrast to the widely used conventional gate drivers, which have no capability for switching dynamics optimisation, the proposed gate driver provides robust and simple control and optimization of the reverse recovery current and turn on losses. The collector current slope and reverse recovery current are controlled by means of the gate emitter voltage control in feed-forward manner. In addition the collector emitter voltage slope is controlled during the voltage falling phase by means of inherent increase of the gate current. Therefore, the collector emitter voltage tail and the total turn on losses are significantly reduced. 
The proposed gate driver was experimentally verified and compared to a conventional gate driver, and the results are presented and discussed in the paper.", "title": "" }, { "docid": "bf69f41014c6086d22ad96c1a368a0e7", "text": "This paper presents the first photometric registration pipeline for Mixed Reality based on high quality illumination estimation using convolutional neural networks (CNNs). For easy adaptation and deployment of the system, we train the CNNs using purely synthetic images and apply them to real image data. To keep the pipeline accurate and efficient, we propose to fuse the light estimation results from multiple CNN instances and show an approach for caching estimates over time. For optimal performance, we furthermore explore multiple strategies for the CNN training. Experimental results show that the proposed method yields highly accurate estimates for photo-realistic augmentations.", "title": "" }, { "docid": "3f09b82a9a9be064819c1d7b402b0031", "text": "Academic dishonesty is widespread within secondary and higher education. It can include unethical academic behaviors such as cheating, plagiarism, or unauthorized help. Researchers have investigated a number of individual and contextual factors in an effort to understand the phenomenon. In the last decade, there has been increasing interest in the role personality plays in explaining unethical academic behaviors. We used meta-analysis to estimate the relationship between each of the Big Five personality factors and academic dishonesty. Previous reviews have highlighted the role of neuroticism and extraversion as potential predictors of cheating behavior. However, our results indicate that conscientiousness and agreeableness are the strongest Big Five predictors, with both factors negatively related to academic dishonesty. We discuss the implications of our findings for both research and practice.", "title": "" }, { "docid": "8933d92ec139e80ffb8f0ebaa909d76c", "text": "Reading an article and answering questions about its content is a fundamental task for natural language understanding. While most successful neural approaches to this problem rely on recurrent neural networks (RNNs), training RNNs over long documents can be prohibitively slow. We present a novel framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance. Our approach combines a coarse, inexpensive model for selecting one or more relevant sentences and a more expensive RNN that produces the answer from those sentences. A central challenge is the lack of intermediate supervision for the coarse model, which we address using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a newly-gathered dataset, while reducing the number of sequential RNN steps by 88% against a standard sequence to sequence model.", "title": "" }, { "docid": "17ccae5f98711c8698f0fb4a449a591f", "text": "Blind image deconvolution: theory and applications. Images are ubiquitous and indispensable in science and everyday life. Mirroring the abilities of our own human visual system, it is natural to display observations of the world in graphical form. Images are obtained in areas ranging from everyday photography to astronomy, remote sensing, medical imaging, and microscopy.
In each case, there is an underlying object or scene we wish to observe; the original or true image is the ideal representation of the observed scene. Yet the observation process is never perfect: there is uncertainty in the measurements , occurring as blur, noise, and other degradations in the recorded images. Digital image restoration aims to recover an estimate of the original image from the degraded observations. The key to being able to solve this ill-posed inverse problem is proper incorporation of prior knowledge about the original image into the restoration process. Classical image restoration seeks an estimate of the true image assuming the blur is known. In contrast, blind image restoration tackles the much more difficult, but realistic, problem where the degradation is unknown. In general, the degradation is nonlinear (including, for example, saturation and quantization) and spatially varying (non uniform motion, imperfect optics); however, for most of the work, it is assumed that the observed image is the output of a Linear Spatially Invariant (LSI) system to which noise is added. Therefore it becomes a Blind Deconvolution (BD) problem, with the unknown blur represented as a Point Spread Function (PSF). Classical restoration has matured since its inception, in the context of space exploration in the 1960s, and numerous techniques can be found in the literature (for recent reviews see [1, 2]). These differ primarily in the prior information about the image they include to perform the restoration task. The earliest algorithms to tackle the BD problem appeared as long ago as the mid-1970s [3, 4], and attempted to identify known patterns in the blur; a small but dedicated effort followed through the late 1980s (see for instance [5, 6, 7, 8, 9]), and a resurgence was seen in the 1990s (see the earlier reviews in [10, 11]). Since then, the area has been extensively explored by the signal processing , astronomical, and optics communities. Many of the BD algorithms have their roots in estimation theory, linear algebra, and numerical analysis. An important question …", "title": "" }, { "docid": "dff0752eace9db08e25904a844533338", "text": "The authors investigated whether accuracy in identifying deception from demeanor in high-stake lies is specific to those lies or generalizes to other high-stake lies. In Experiment 1, 48 observers judged whether 2 different groups of men were telling lies about a mock theft (crime scenario) or about their opinion (opinion scenario). The authors found that observers' accuracy in judging deception in the crime scenario was positively correlated with their accuracy in judging deception in the opinion scenario. Experiment 2 replicated the results of Experiment 1, as well as P. Ekman and M. O'Sullivan's (1991) finding of a positive correlation between the ability to detect deceit and the ability to identify micromomentary facial expressions of emotion. These results show that the ability to detect high-stake lies generalizes across high-stake situations and is most likely due to the presence of emotional clues that betray deception in high-stake lies.", "title": "" }, { "docid": "e2f8ecd3b325a3f067e53e9beb087919", "text": "This paper presents a seven-dimensional ordinary differential equation modelling the transmission of Plasmodium falciparum malaria between humans and mosquitoes with non-linear forces of infection in form of saturated incidence rates. 
These incidence rates produce antibodies in response to the presence of the malaria-causing parasite in both human and mosquito populations. The existence of a region where the model is epidemiologically feasible is established. Stability analysis of the disease-free equilibrium is investigated via the threshold parameter (reproduction number R0) obtained using the next generation matrix technique. The model results show that the disease-free equilibrium is asymptotically stable when the threshold parameter is less than unity and unstable when it is greater than unity. The existence of the unique endemic equilibrium is also determined under certain conditions. Numerical simulations are carried out to confirm the analytic results and explore the possible behavior of the formulated model. AMS Subject Classification: 92B05, 93A30", "title": "" }, { "docid": "c3bf8153bfcb0d430d1189153de6242c", "text": "Sentiment analysis is one of the key challenges for mining online user generated content. In this work, we focus on customer reviews which are an important form of opinionated content. The goal is to identify the semantic orientation (e.g. positive or negative) of each sentence of a review. Traditional sentiment classification methods often involve substantial human efforts, e.g. lexicon construction, feature engineering. In recent years, deep learning has emerged as an effective means for solving sentiment classification problems. A neural network intrinsically learns a useful representation automatically without human efforts. However, the success of deep learning highly relies on the availability of large-scale training data. In this paper, we propose a novel deep learning framework for review sentiment classification which employs prevalently available ratings as weak supervision signals. The framework consists of two steps: (1) learn a high level representation (embedding space) which captures the general sentiment distribution of sentences through rating information; (2) add a classification layer on top of the embedding layer and use labeled sentences for supervised fine-tuning. Experiments on review data obtained from Amazon show the efficacy of our method and its superiority over baseline methods.", "title": "" }, { "docid": "9e5ea2211fda032877c68de406b6cf44", "text": "Two-dimensional crystals are emerging materials for nanoelectronics. Development of the field requires candidate systems with both a high carrier mobility and, in contrast to graphene, a sufficiently large electronic bandgap. Here we present a detailed theoretical investigation of the atomic and electronic structure of few-layer black phosphorus (BP) to predict its electrical and optical properties. This system has a direct bandgap, tunable from 1.51 eV for a monolayer to 0.59 eV for a five-layer sample. We predict that the mobilities are hole-dominated, rather high and highly anisotropic. The monolayer is exceptional in having an extremely high hole mobility (of order 10,000 cm(2) V(-1) s(-1)) and anomalous elastic properties which reverse the anisotropy. Light absorption spectra indicate linear dichroism between perpendicular in-plane directions, which allows optical determination of the crystalline orientation and optical activation of the anisotropic transport properties.
These results make few-layer BP a promising candidate for future electronics.", "title": "" }, { "docid": "c47a0d644a0e9637aecff0ad1270732a", "text": "This paper presents the results obtained with the implementation of a series of learning activities based on Mobile Serious Games (MSGs) for the development of problem solving and collaborative skills in Chilean 8th grade students. Three MSGs were developed and played by teams of four students in order to solve problems collaboratively. A quasi-experimental design was used. The data shows that the experimental group achieved a higher perception of their own collaboration skills and a higher score in the plan execution dimension of the problem solving cycle than did the non-equivalent control group, revealing that MSG-based learning activities may contribute to such learning improvements. This challenges future research to identify under which conditions learning activities based on mobile serious games can promote the development of higher order skills. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cbba6c341bd0440874d6a882c944a60a", "text": "Mining software repositories at the source code level can provide a greater understanding of how software evolves. We present a tool for quickly comparing the source code of different versions of a C program. The approach is based on partial abstract syntax tree matching, and can track simple changes to global variables, types and functions. These changes can characterize aspects of software evolution useful for answering higher level questions. In particular, we consider how they could be used to inform the design of a dynamic software updating system. We report results based on measurements of various versions of popular open source programs. including BIND, OpenSSH, Apache, Vsftpd and the Linux kernel.", "title": "" }, { "docid": "7e44e32b6e19a884f12b2f4b337909ca", "text": "Many computational problems can be solved by multiple algorithms, with different algorithms fastest for different problem sizes, input distributions, and hardware characteristics. We consider the problem ofalgorithm selection: dynamically choose an algorithm to attack an instance of a problem with the goal of minimizing the overall execution time. We formulate the problem as a kind of Markov decision process (MDP), and use ideas from reinforcement learning to solve it. This paper introduces a kind of MDP that models the algorithm selection problem by allowing multiple state transitions. The well known Q-learning algorithm is adapted for this case in a way that combines both Monte-Carlo and Temporal Difference methods. Also, this work uses, and extends in a way to control problems, the Least-Squares Temporal Difference algorithm (LSTD ) of Boyan. The experimental study focuses on the classic problems of order statistic selection and sorting. The encouraging results reveal the potential of applying learning methods to traditional computational problems.", "title": "" }, { "docid": "e830098f9c045d376177e6d2644d4a06", "text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. 
A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.", "title": "" }, { "docid": "c17014a87540d935deec23346846bf86", "text": "BACKGROUND\nThe rapid development of genome sequencing technology allows researchers to access large genome datasets. However, outsourcing the data processing o the cloud poses high risks for personal privacy. The aim of this paper is to give a practical solution for this problem using homomorphic encryption. In our approach, all the computations can be performed in an untrusted cloud without requiring the decryption key or any interaction with the data owner, which preserves the privacy of genome data.\n\n\nMETHODS\nWe present evaluation algorithms for secure computation of the minor allele frequencies and χ2 statistic in a genome-wide association studies setting. We also describe how to privately compute the Hamming distance and approximate Edit distance between encrypted DNA sequences. Finally, we compare performance details of using two practical homomorphic encryption schemes--the BGV scheme by Gentry, Halevi and Smart and the YASHE scheme by Bos, Lauter, Loftus and Naehrig.\n\n\nRESULTS\nThe approach with the YASHE scheme analyzes data from 400 people within about 2 seconds and picks a variant associated with disease from 311 spots. For another task, using the BGV scheme, it took about 65 seconds to securely compute the approximate Edit distance for DNA sequences of size 5K and figure out the differences between them.\n\n\nCONCLUSIONS\nThe performance numbers for BGV are better than YASHE when homomorphically evaluating deep circuits (like the Hamming distance algorithm or approximate Edit distance algorithm). On the other hand, it is more efficient to use the YASHE scheme for a low-degree computation, such as minor allele frequencies or χ2 test statistic in a case-control study.", "title": "" }, { "docid": "babc8964627101b1cccfb5bd5acd36be", "text": "Knowledge Management (KM) is a diffuse and controversial term, which has been used by a large number of research disciplines. CSCW, over the last 20 years, has taken a critical stance towards most of these approaches, and instead, CSCW shifted the focus towards a practice-based perspective. This paper surveys CSCW researchers’ viewpoints on what has become called ‘knowledge sharing’ and ‘expertise sharing’. 
These are based in an understanding of the social contexts of knowledge work and practices, as well as in an emphasis on communication among knowledgeable humans. The paper provides a summary and overview of the two strands of knowledge and expertise sharing in CSCW, which, from an analytical standpoint, roughly represent ‘generations’ of research: an ‘object-centric’ and a ‘people-centric’ view. We also survey the challenges and opportunities ahead.", "title": "" }, { "docid": "78276f95c0080200585b89221a94f5ed", "text": "Skeletal muscle damaged by injury or by degenerative diseases such as muscular dystrophy is able to regenerate new muscle fibers. Regeneration mainly depends upon satellite cells, myogenic progenitors localized between the basal lamina and the muscle fiber membrane. However, other cell types outside the basal lamina, such as pericytes, also have myogenic potency. Here, we discuss the main properties of satellite cells and other myogenic progenitors as well as recent efforts to obtain myogenic cells from pluripotent stem cells for patient-tailored cell therapy. Clinical trials utilizing these cells to treat muscular dystrophies, heart failure, and stress urinary incontinence are also briefly outlined.", "title": "" }, { "docid": "ae7fb63bb4a70aa508fab8500e451402", "text": "Dynamic Optimization Problems (DOPs) have been widely studied using Evolutionary Algorithms (EAs). Yet, a clear and rigorous definition of DOPs is lacking in the Evolutionary Dynamic Optimization (EDO) community. In this paper, we propose a unified definition of DOPs based on the idea of multiple-decision-making discussed in the Reinforcement Learning (RL) community. We draw a connection between EDO and RL by arguing that both of them are studying DOPs according to our definition of DOPs. We point out that existing EDO or RL research has been mainly focused on some types of DOPs. A conceptualized benchmark problem, which is aimed at the systematic study of various DOPs, is then developed. Some interesting experimental studies on the benchmark reveal that EDO and RL methods are specialized in certain types of DOPs and more importantly new algorithms for DOPs can be developed by combining the strength of both EDO and RL methods.", "title": "" } ]
scidocsrr
98cf5164b97b7bce619b253d33fc31a4
DecoBrush: drawing structured decorative patterns by example
[ { "docid": "0bd720d912575c0810c65d04f6b1712b", "text": "Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework optionally can also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.", "title": "" } ]
[ { "docid": "fe536bcb97b9cb905f68f2f8f0d7ae4e", "text": "Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality but in order to work they require a careful choice of architecture, parameter initialization, and selection of hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f -divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer accross several architectures trained on common benchmark image generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning. 1", "title": "" }, { "docid": "98b2f0b348116f1207cb4dd53622d51c", "text": "Measuring the performance of solar energy and heat transfer systems requires a lot of time, economic cost and manpower. Meanwhile, directly predicting their performance is challenging due to the complicated internal structures. Fortunately, a knowledge-based machine learning method can provide a promising prediction and optimization strategy for the performance of energy systems. In this Chapter, the authors will show how they utilize the machine learning models trained from a large experimental database to perform precise prediction and optimization on a solar water heater (SWH) system. A new energy system optimization strategy based on a high-throughput screening (HTS) process is proposed. This Chapter consists of: i) Comparative studies on varieties of machine learning models (artificial neural networks (ANNs), support vector machine (SVM) and extreme learning machine (ELM)) to predict the performances of SWHs; ii) Development of an ANN-based software to assist the quick prediction and iii) Introduction of a computational HTS method to design a high-performance SWH system.", "title": "" }, { "docid": "04ff9fe1984fded27d638fe2552adf79", "text": "While social networks can provide an ideal platform for upto-date information from individuals across the world, it has also proved to be a place where rumours fester and accidental or deliberate misinformation often emerges. In this article, we aim to support the task of making sense from social media data, and specifically, seek to build an autonomous message-classifier that filters relevant and trustworthy information from Twitter. For our work, we collected about 100 million public tweets, including users’ past tweets, from which we identified 72 rumours (41 true, 31 false). We considered over 80 trustworthiness measures including the authors’ profile and past behaviour, the social network connections (graphs), and the content of tweets themselves. We ran modern machine-learning classifiers over those measures to produce trustworthiness scores at various time windows from the outbreak of the rumour. Such time-windows were key as they allowed useful insight into the progression of the rumours. From our findings, we identified that our model was significantly more accurate than similar studies in the literature. We also identified critical attributes of the data that give rise to the trustworthiness scores assigned. 
Finally we developed a software demonstration that provides a visual user interface to allow the user to examine the analysis.", "title": "" }, { "docid": "a497d0e4de19d5660deb54b6dee42ebc", "text": "The aim of our study was to provide a contribution to the research field of the critical success factors (CSFs) of ERP projects, with specific focus on smaller enterprises (SMEs). Therefore, we conducted a systematic literature review in order to update the existing reviews of CSFs. On the basis of that review, we led several interviews within German SMEs and with ERP consultants experienced with ERP implementations in SMEs. As a result, we showed that all factors found in the literature also affected the success of ERP projects in SMEs. However, within those projects, technological factors gained much more importance compared to the factors which influence the success from larger ERP projects the most. For SMEs, factors like Organizational fit of the ERP system as well as ERP system tests are even more important than Top management support or Project management, which were the most important factors for large-scaled companies.", "title": "" }, { "docid": "121a497fa8d2e8e3d84140f267169d1a", "text": "Deep convolutional neural network (DCNN) based supervised learning is a widely practiced approach for large-scale image classification. However, retraining these large networks to accommodate new, previously unseen data demands high computational time and energy requirements. Also, previously seen training samples may not be available at the time of retraining. We propose an efficient training methodology and incrementally growing DCNN to allow new classes to be learned while sharing part of the base network. Our proposed methodology is inspired by transfer learning techniques, although it does not forget previously learned classes. An updated network for learning new set of classes is formed using previously learned convolutional layers (shared from initial part of base network) with addition of few newly added convolutional kernels included in the later layers of the network. We evaluated the proposed scheme on several recognition applications. The classification accuracy achieved by our approach is comparable to the regular incremental learning approach (where networks are updated with new training samples only, without any network sharing), while achieving energy efficiency, reduction in storage requirements, memory access and training time.", "title": "" }, { "docid": "bf9d706685f76877a56d323423b32a5c", "text": "BACKGROUND\nFine particulate air pollution has been linked to cardiovascular disease, but previous studies have assessed only mortality and differences in exposure between cities. We examined the association of long-term exposure to particulate matter of less than 2.5 microm in aerodynamic diameter (PM2.5) with cardiovascular events.\n\n\nMETHODS\nWe studied 65,893 postmenopausal women without previous cardiovascular disease in 36 U.S. metropolitan areas from 1994 to 1998, with a median follow-up of 6 years. We assessed the women's exposure to air pollutants using the monitor located nearest to each woman's residence. 
Hazard ratios were estimated for the first cardiovascular event, adjusting for age, race or ethnic group, smoking status, educational level, household income, body-mass index, and presence or absence of diabetes, hypertension, or hypercholesterolemia.\n\n\nRESULTS\nA total of 1816 women had one or more fatal or nonfatal cardiovascular events, as confirmed by a review of medical records, including death from coronary heart disease or cerebrovascular disease, coronary revascularization, myocardial infarction, and stroke. In 2000, levels of PM2.5 exposure varied from 3.4 to 28.3 microg per cubic meter (mean, 13.5). Each increase of 10 microg per cubic meter was associated with a 24% increase in the risk of a cardiovascular event (hazard ratio, 1.24; 95% confidence interval [CI], 1.09 to 1.41) and a 76% increase in the risk of death from cardiovascular disease (hazard ratio, 1.76; 95% CI, 1.25 to 2.47). For cardiovascular events, the between-city effect appeared to be smaller than the within-city effect. The risk of cerebrovascular events was also associated with increased levels of PM2.5 (hazard ratio, 1.35; 95% CI, 1.08 to 1.68).\n\n\nCONCLUSIONS\nLong-term exposure to fine particulate air pollution is associated with the incidence of cardiovascular disease and death among postmenopausal women. Exposure differences within cities are associated with the risk of cardiovascular disease.", "title": "" }, { "docid": "116463e16452d6847c94f662a90ac2ef", "text": "The ubiquity of mobile devices with global positioning functionality (e.g., GPS and AGPS) and Internet connectivity (e.g., 3G andWi-Fi) has resulted in widespread development of location-based services (LBS). Typical examples of LBS include local business search, e-marketing, social networking, and automotive traffic monitoring. Although LBS provide valuable services for mobile users, revealing their private locations to potentially untrusted LBS service providers pose privacy concerns. In general, there are two types of LBS, namely, snapshot and continuous LBS. For snapshot LBS, a mobile user only needs to report its current location to a service provider once to get its desired information. On the other hand, a mobile user has to report its location to a service provider in a periodic or on-demand manner to obtain its desired continuous LBS. Protecting user location privacy for continuous LBS is more challenging than snapshot LBS because adversaries may use the spatial and temporal correlations in the user's location samples to infer the user's location information with higher certainty. Such user location trajectories are also very important for many applications, e.g., business analysis, city planning, and intelligent transportation. However, publishing such location trajectories to the public or a third party for data analysis could pose serious privacy concerns. Privacy protection in continuous LBS and trajectory data publication has increasingly drawn attention from the research community and industry. In this survey, we give an overview of the state-of-the-art privacy-preserving techniques in these two problems.", "title": "" }, { "docid": "1465aa476fe6313f15009bed69546a7d", "text": "The skyline operator and its variants such as dynamic skyline and reverse skyline operators have attracted considerable attention recently due to their broad applications. However, computations of such operators are challenging today since there is an increasing trend of applications to deal with big data. 
For such data-intensive applications, the MapReduce framework has been widely used recently. In this paper, we propose efficient parallel algorithms for processing the skyline and its variants using MapReduce. We first build histograms to effectively prune out non-skyline (non-reverse skyline) points in advance. We next partition data based on the regions divided by the histograms and compute candidate (reverse) skyline points for each region independently using MapReduce. Finally, we check whether each candidate point is actually a (reverse) skyline point in every region independently. Our performance study confirms the effectiveness and scalability of the proposed algorithms.", "title": "" }, { "docid": "746b9e9e1fdacc76d3acb4f78d824901", "text": "This paper proposes a new method for the detection of glaucoma using fundus image which mainly affects the optic disc by increasing the cup size is proposed. The ratio of the optic cup to disc (CDR) in retinal fundus images is one of the primary physiological parameter for the diagnosis of glaucoma. The Kmeans clustering technique is recursively applied to extract the optic disc and optic cup region and an elliptical fitting technique is applied to find the CDR values. The blood vessels in the optic disc region are detected by using local entropy thresholding approach. The ratio of area of blood vessels in the inferiorsuperior side to area of blood vessels in the nasal-temporal side (ISNT) is combined with the CDR for the classification of fundus image as normal or glaucoma by using K-Nearest neighbor , Support Vector Machine and Bayes classifier. A batch of 36 retinal images obtained from the Aravind Eye Hospital, Madurai, Tamilnadu, India is used to assess the performance of the proposed system and a classification rate of 95% is achieved.", "title": "" }, { "docid": "75c5a3f0d57a6a39868b28685d92d7b5", "text": "The complexity of the healthcare system is increasing, and the moral duty to provide quality patient care is threatened by the sky rocketing cost of healthcare. A major concern for both patients and the hospital’s economic bottom line are hospital-acquired infections (HAIs), including central line associated blood stream infections (CLABSIs). These often serious infections result in significantly increased patient morbidity, mortality, length of stay, and use of health care resources. Historically, most infection prevention and control measures have focused on aseptic technique of health care providers and in managing the environment. Emerging evidence for the role of host decontamination in preventing HAIs is shifting the paradigm and paving a new path for novel infection prevention interventions. Chlorhexidine gluconate has a long-standing track record of being a safe and effective product with broad antiseptic activity, and little evidence of emerging resistance. As the attention is directed toward control and prevention of HAIs, chlorhexidine-containing products may prove to be a vital tool in infection control. Increasing rates of multidrug-resistant organisms (MDROs), including methicillinresistant Staphylococcus aureus (MRSA), Acinetobacter baumanniic and vancomycin-resistant Enterococcus (VRE) demand that evidence-based research drive all interventions to prevent transmission of these organisms and the development of HAIs. 
This review of literature examines current evidence related to daily chlorhexidine gluconate bathing and its impact on CLABSI rates in the adult critically ill patient population.", "title": "" }, { "docid": "aeadbf476331a67bec51d5d6fb6cc80b", "text": "Gamification, an emerging idea for using game-design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, little research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users, that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and at the same time, advance", "title": "" }, { "docid": "cb10555298064ce053c5b02a938bc281", "text": "The increasing demand for energy in the near future has created strong motivation for environmentally clean alternative energy resources. Microbial fuel cells (MFCs) have opened up new ways of utilizing renewable energy sources. MFCs are devices that convert the chemical energy in the organic compounds to electrical energy through microbial catalysis at the anode under anaerobic conditions, and the reduction of a terminal electron acceptor, most preferentially oxygen, at the cathode. Due to the rapid advances in MFC-based technology over the last decade, the currently achievable MFC power production has increased by several orders of magnitude, and niche applications have been extended into a variety of areas. Newly emerging concepts with alternative materials for electrodes and catalysts as well as innovative designs have made MFCs promising technologies. Aerobic bacteria can also be used as cathode catalysts. This is an encouraging finding because not only biofouling on the cathode is unavoidable in the prolonged-run MFCs but also noble catalysts can be substituted with aerobic bacteria. This article discusses some of the recent advances in MFCs with an emphasis on the performance, materials, microbial community structures and applications beyond electricity generation.", "title": "" }, { "docid": "513455013ecb2f4368566ba30cdb8d7f", "text": "Many modern multi-core processors sport a large shared cache with the primary goal of enhancing the statistic performance of computing workloads. However, due to resulting cache interference among tasks, the uncontrolled use of such a shared cache can significantly hamper the predictability and analyzability of multi-core real-time systems. 
Software cache partitioning has been considered as an attractive approach to address this issue because it does not require any hardware support beyond that available on many modern processors. However, the state-of-the-art software cache partitioning techniques face two challenges: (1) the memory co-partitioning problem, which results in page swapping or waste of memory, and (2) the availability of a limited number of cache partitions, which causes degraded performance. These are major impediments to the practical adoption of software cache partitioning. In this paper, we propose a practical OS-level cache management scheme for multi-core real-time systems. Our scheme provides predictable cache performance, addresses the aforementioned problems of existing software cache partitioning, and efficiently allocates cache partitions to schedule a given task set. We have implemented and evaluated our scheme in Linux/RK running on the Intel Core i7 quad-core processor. Experimental results indicate that, compared to the traditional approaches, our scheme is up to 39% more memory space efficient and consumes up to 25% less cache partitions while maintaining cache predictability. Our scheme also yields a significant utilization benefit that increases with the number of tasks.", "title": "" }, { "docid": "8af844944f6edee4c271d73a552dc073", "text": "Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of embeddings in a real-world application, we carry out auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.", "title": "" }, { "docid": "220e4d6e207ea14beaa1526383fbaccb", "text": "A millimeter-wave sinusoidally modulated (SM) leaky-wave antenna (LWA) based on inset dielectric waveguide (IDW) is presented in this paper. The proposed antenna, radiating at 10° from broadside at 60 GHz, consists of a SM IDW, a rectangular waveguide for excitation and a transition for impedance matching. Fundamental TE01 mode is excited by the IDW with the leaky wave generated by the SM inset groove depth. The electric field is normal to the metallic waveguide wall and thus reduces the conductor loss. As a proof of concept, the modulated dielectric inset as well as the dielectric transition are conveniently fabricated by 3-D printing (tan δ = 0.02). Measurements of the antenna prototype show that the main beam can be scanned from -9° to 40° in a frequency range from 50 to 85 GHz within a gain variation between 9.1 and 14.2 dBi. Meanwhile, the reflection coefficient |S11| is kept below -13.4 dB over the whole frequency band. The measured results agree reasonably well with simulations. 
Furthermore, the gain of the proposed antenna can be enhanced by extending its length and using low-loss dielectric materials such as Teflon (tan δ <; 0.002).", "title": "" }, { "docid": "5ed955ddaaf09fc61c214adba6b18449", "text": "This study investigates how customers perceive and adopt Internet Banking (IB) in Hong Kong. We developed a theoretical model based on the Technology Acceptance Model (TAM) with an added construct Perceived Web Security, and empirically tested its ability in predicting customers’ behavioral intention of adopting IB. We designed a questionnaire and used it to survey a randomly selected sample of customers of IB from the Yellow Pages, and obtained 203 usable responses. We analyzed the data using Structured Equation Modeling (SEM) to evaluate the strength of the hypothesized relationships, if any, among the constructs, which include Perceived Ease of Use and Perceived Web Security as independent variables, Perceived Usefulness and Attitude as intervening variables, and Intention to Use as the dependent variable. The results provide support of the extended TAM model and confirm its robustness in predicting customers’ intention of adoption of IB. This study contributes to the literature by formulating and validating TAM to predict IB adoption, and its findings provide useful information for bank management in formulating IB marketing strategies.", "title": "" }, { "docid": "97e2d66e927c0592b88bef38a8899547", "text": "Shared services have been heralded as a means of enhancing services and improving the efficiency of their delivery. As such they have been embraced by the private, and increasingly, the public sectors. Yet implementation has proved to be difficult and the number of success stories has been limited. Which factors are critical to success in the development of shared services arrangements is not yet well understood. The current paper examines existing research in the area of critical success factors (CSFs) and suggests that there are actually three distinct types of CSF: outcome, implementation process and operating environment characteristic. Two case studies of public sector shared services in Australia and the Netherlands are examined through a lens that both incorporates all three types of CSF and distinguishes between them.", "title": "" }, { "docid": "fee96195e50e7418b5d63f8e6bd07907", "text": "Optimal power flow (OPF) is considered for microgrids, with the objective of minimizing either the power distribution losses, or, the cost of power drawn from the substation and supplied by distributed generation (DG) units, while effecting voltage regulation. The microgrid is unbalanced, due to unequal loads in each phase and non-equilateral conductor spacings on the distribution lines. Similar to OPF formulations for balanced systems, the considered OPF problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a convex problem solvable in polynomial-time complexity. Enticingly, numerical tests demonstrate the ability of the proposed method to attain the globally optimal solution of the original nonconvex OPF. To ensure scalability with respect to the number of nodes, robustness to isolated communication outages, and data privacy and integrity, the proposed SDP is solved in a distributed fashion by resorting to the alternating direction method of multipliers. 
The resulting algorithm entails iterative message-passing among groups of consumers and guarantees faster convergence compared to competing alternatives.", "title": "" }, { "docid": "1b18b2b05e6fe19060039cd02ddb6131", "text": "Objective assessment of image quality is fundamentally important in many image processing tasks. In this paper, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here, we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms the state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL inferred quality index achieves an additional performance gain.", "title": "" } ]
scidocsrr
5d0519d839bb94fba476a80d3af2ca57
The Wikipedia Corpus
[ { "docid": "7242400e9d0043b74e5baa931ccb83ed", "text": "The Wikipedia is a collaborative encyclopedia: anyone can contribute to its articles simply by clicking on an \"edit\" button. The open nature of the Wikipedia has been key to its success, but has also created a challenge: how can readers develop an informed opinion on its reliability? We propose a system that computes quantitative values of trust for the text in Wikipedia articles; these trust values provide an indication of text reliability.\n The system uses as input the revision history of each article, as well as information about the reputation of the contributing authors, as provided by a reputation system. The trust of a word in an article is computed on the basis of the reputation of the original author of the word, as well as the reputation of all authors who edited text near the word. The algorithm computes word trust values that vary smoothly across the text; the trust values can be visualized using varying text-background colors. The algorithm ensures that all changes to an article's text are reflected in the trust values, preventing surreptitious content changes.\n We have implemented the proposed system, and we have used it to compute and display the trust of the text of thousands of articles of the English Wikipedia. To validate our trust-computation algorithms, we show that text labeled as low-trust has a significantly higher probability of being edited in the future than text labeled as high-trust.", "title": "" } ]
[ { "docid": "d071c70b85b10a62538d73c7272f5d99", "text": "The Amaryllidaceae alkaloids represent a large (over 300 alkaloids have been isolated) and still expanding group of biogenetically related isoquinoline alkaloids that are found exclusively in plants belonging to this family. In spite of their great variety of pharmacological and/or biological properties, only galanthamine is used therapeutically. First isolated from Galanthus species, this alkaloid is a long-acting, selective, reversible and competitive inhibitor of acetylcholinesterase, and is used for the treatment of Alzheimer’s disease. Other Amaryllidaceae alkaloids of pharmacological interest will also be described in this chapter.", "title": "" }, { "docid": "1c9055e6f484b6f4225400d3bdc16320", "text": "Transactions with strong consistency and high availability simplify building and reasoning about distributed systems. However, previous implementations performed poorly. This forced system designers to avoid transactions completely, to weaken consistency guarantees, or to provide single-machine transactions that require programmers to partition their data. In this paper, we show that there is no need to compromise in modern data centers. We show that a main memory distributed computing platform called FaRM can provide distributed transactions with strict serializability, high performance, durability, and high availability. FaRM achieves a peak throughput of 140 million TATP transactions per second on 90 machines with a 4.9 TB database, and it recovers from a failure in less than 50 ms. Key to achieving these results was the design of new transaction, replication, and recovery protocols from first principles to leverage commodity networks with RDMA and a new, inexpensive approach to providing non-volatile DRAM.", "title": "" }, { "docid": "68fa8199b92bf8280856138f13c5456a", "text": "To enhance the resolution and accuracy of depth data, some video-based depth super-resolution methods have been proposed, which utilizes its neighboring depth images in the temporal domain. They often consist of two main stages: motion compensation of temporally neighboring depth images and fusion of compensated depth images. However, large displacement 3D motion often leads to compensation error, and the compensation error is further introduced into the fusion. A video-based depth super-resolution method with novel motion compensation and fusion approaches is proposed in this paper. We claim that 3D nearest neighboring field (NNF) is a better choice than using positions with true motion displacement for depth enhancements. To handle large displacement 3D motion, the compensation stage utilized 3D NNF instead of true motion used in the previous methods. Next, the fusion approach is modeled as a regression problem to predict the super-resolution result efficiently for each depth image by using its compensated depth images. A new deep convolutional neural network architecture is designed for fusion, which is able to employ a large amount of video data for learning the complicated regression function. We comprehensively evaluate our method on various RGB-D video sequences to show its superior performance.", "title": "" }, { "docid": "5ca1c503cba0db452d0e5969e678db97", "text": "Deep neural network models have recently achieved state-of-the-art performance gains in a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017). 
However, these gains rely on the availability of large amounts of annotated examples, without which state-of-the-art performance is rarely achievable. This is especially inconvenient for the many NLP fields where annotated examples are scarce, such as medical text. To improve NLP models in this situation, we evaluate five improvements on named entity recognition (NER) tasks when only ten annotated examples are available: (1) layer-wise initialization with pre-trained weights, (2) hyperparameter tuning, (3) combining pre-training data, (4) custom word embeddings, and (5) optimizing out-of-vocabulary (OOV) words. Experimental results show that the F1 score of 69.3% achievable by state-of-the-art models can be improved to 78.87%.", "title": "" }, { "docid": "2936f8e1f9a6dcf2ba4fdbaee73684e2", "text": "Recently the world of the web has become more social and more real-time. Facebook and Twitter are perhaps the exemplars of a new generation of social, real-time web services and we believe these types of service provide a fertile ground for recommender systems research. In this paper we focus on one of the key features of the social web, namely the creation of relationships between users. Like recent research, we view this as an important recommendation problem -- for a given user, UT, which other users might be recommended as followers/followees -- but unlike other researchers we attempt to harness the real-time web as the basis for profiling and recommendation. To this end we evaluate a range of different profiling and recommendation strategies, based on a large dataset of Twitter users and their tweets, to demonstrate the potential for effective and efficient followee recommendation.", "title": "" }, { "docid": "8aaaa2b1410522afe5dd604af1140ec2", "text": "This paper provides a pragmatic approach to analysing qualitative data, using actual data from a qualitative dental public health study for demonstration purposes. The paper also critically explores how computers can be used to facilitate this process, the debate about the verification (validation) of qualitative analyses and how to write up and present qualitative research studies.", "title": "" }, { "docid": "bb433cd5b65b166ed27fb12cb5b72a86", "text": "What constitutes learning in the 21st century will be contested terrain as our society strives toward post-industrial forms of knowledge acquisition and production without having yet overcome the educational contradictions and failings of the industrial age. Educational reformers suggest that the advent of new technologies will radically transform what people learn, how they learn, and where they learn, yet studies of diverse learners’ use of new media cast doubt on the speed and extent of change. Drawing on recent empirical and theoretical work, this essay critically examines beliefs about the nature of digital learning and points to the role of social, cultural, and economic factors in shaping and constraining educational transformation in the digital era.", "title": "" }, { "docid": "9692ab0e46c6e370aeb171d3224f5d23", "text": "With the advent of Remote Sensing (RS) and Geographic Information Systems (GIS) technology, network transportation (road) analysis within this environment has now become a common practice in many application areas. But a main problem in network transportation analysis is poor quality and insufficient maintenance policies. This is because of the lack of funds for infrastructure. 
This demand for information requires new approaches in which data related to transportation network can be identified, collected, stored, retrieved, managed, analyzed, communicated and presented, for the decision support system of the organization. The adoption of newly emerging technologies such as Geographic Information System (GIS) can help to improve the decision making process in this area for better use of the available limited funds. The paper reviews the applications of GIS technology for transportation network analysis.", "title": "" }, { "docid": "cf0a4f12c23b42c08b6404fe897ed646", "text": "By performing computation at the location of data, non-Von Neumann (VN) computing should provide power and speed benefits over conventional (e.g., VN-based) approaches to data-centric workloads such as deep learning. For the on-chip training of largescale deep neural networks using nonvolatile memory (NVM) based synapses, success will require performance levels (e.g., deep neural network classification accuracies) that are competitive with conventional approaches despite the inherent imperfections of such NVM devices, and will also require massively parallel yet low-power read and write access. In this paper, we focus on the latter requirement, and outline the engineering tradeoffs in performing parallel reads and writes to large arrays of NVM devices to implement this acceleration through what is, at least locally, analog computing. We address how the circuit requirements for this new neuromorphic computing approach are somewhat reminiscent of, yet significantly different from, the well-known requirements found in conventional memory applications. We discuss tradeoffs that can influence both the effective acceleration factor (“speed”) and power requirements of such on-chip learning accelerators. P. Narayanan A. Fumarola L. L. Sanches K. Hosokawa S. C. Lewis R. M. Shelby G. W. Burr", "title": "" }, { "docid": "52b1c306355e6bf8ba10ea7e3cf1d05e", "text": "QUESTION\nIs there a means of assessing research impact beyond citation analysis?\n\n\nSETTING\nThe case study took place at the Washington University School of Medicine Becker Medical Library.\n\n\nMETHOD\nThis case study analyzed the research study process to identify indicators beyond citation count that demonstrate research impact.\n\n\nMAIN RESULTS\nThe authors discovered a number of indicators that can be documented for assessment of research impact, as well as resources to locate evidence of impact. As a result of the project, the authors developed a model for assessment of research impact, the Becker Medical Library Model for Assessment of Research.\n\n\nCONCLUSION\nAssessment of research impact using traditional citation analysis alone is not a sufficient tool for assessing the impact of research findings, and it is not predictive of subsequent clinical applications resulting in meaningful health outcomes. The Becker Model can be used by both researchers and librarians to document research impact to supplement citation analysis.", "title": "" }, { "docid": "ff3a9ba87c71a83455d0580a79f9901d", "text": "Transfer learning, which allows a source task to affect the inductive bias of the target task, is widely used in computer vision. The typical way of conducting transfer learning with deep neural networks is to fine-tune a model pretrained on the source task using data from the target task. 
In this paper, we propose an adaptive fine-tuning approach, called SpotTune, which finds the optimal fine-tuning strategy per instance for the target data. In SpotTune, given an image from the target task, a policy network is used to make routing decisions on whether to pass the image through the fine-tuned layers or the pre-trained layers. We conduct extensive experiments to demonstrate the effectiveness of the proposed approach. Our method outperforms the traditional fine-tuning approach on 12 out of 14 standard datasets. We also compare SpotTune with other state-of-the-art fine-tuning strategies, showing superior performance. On the Visual Decathlon datasets, our method achieves the highest score across the board without bells and whistles.", "title": "" }, { "docid": "aa362363d6e4b48f7d0b50b02f35a8a2", "text": "In this paper, we mainly adopt the voting combination method to implement incremental learning for SVM. The incremental learning algorithm proposed in this paper contains two parts in order to tackle different types of incremental learning cases: the first part deals with on-line incremental learning, and the second part deals with batch incremental learning. Finally, we present experiments to verify the validity and efficiency of the algorithm.", "title": "" }, { "docid": "7fb55495eedca648f8d03227b790a1bd", "text": "Dental erosion is increasing, and only recently are clinicians starting to acknowledge the problem. A prospective clinical trial investigating which therapeutic approach must be undertaken to treat erosion and when is under way at the University of Geneva (Geneva Erosion Study). All patients affected by dental erosion who present with signs of dentin exposure are immediately treated using only adhesive techniques. In this article, the full-mouth adhesive rehabilitation of one of these patients affected by severe dental erosion (ACE class IV) is illustrated. By the end of the therapy, a very pleasing esthetic outcome had been achieved (esthetic success), all of the patient's teeth maintained their vitality, and the amount of tooth structure sacrificed to complete the adhesive full-mouth rehabilitation was negligible (biological success).", "title": "" }, { "docid": "98729fc6a6b95222e6a6a12aa9a7ded7", "text": "What good is self-control? We incorporated a new measure of individual differences in self-control into two large investigations of a broad spectrum of behaviors. The new scale showed good internal consistency and retest reliability. Higher scores on self-control correlated with a higher grade point average, better adjustment (fewer reports of psychopathology, higher self-esteem), less binge eating and alcohol abuse, better relationships and interpersonal skills, secure attachment, and more optimal emotional responses. Tests for curvilinearity failed to indicate any drawbacks of so-called overcontrol, and the positive effects remained after controlling for social desirability. Low self-control is thus a significant risk factor for a broad range of personal and interpersonal problems.", "title": "" }, { "docid": "6f709e89edaa619f41335b1a06eb713a", "text": "A graphene patch microstrip antenna has been investigated for 600 GHz applications. The graphene material introduces a reconfigurable surface conductivity in the terahertz frequency band. The input impedance is calculated using the finite integral technique. A five-element lumped equivalent circuit for the graphene patch microstrip antenna has been investigated. 
The values of the lumped equivalent circuit elements are optimized using the particle swarm optimization (PSO) technique. The optimization is performed to minimize the mean square error between the input impedance of the finite integral technique and that calculated by the equivalent circuit model. The effect of varying the graphene material chemical potential and relaxation time on the radiation characteristics of the graphene patch microstrip antenna has been investigated. An improved equivalent circuit model has been introduced to best fit the input impedance using a rational function and PSO. Cauer's realization method is used to synthesize a new lumped-element equivalent circuit.", "title": "" }, { "docid": "aa2ddbfc3bb1aa854d1c576927dc2d30", "text": "B-scan ultrasound provides a non-invasive low-cost imaging solution to primary care diagnostics. The inherent speckle noise in the images produced by this technique introduces uncertainty in the representation of their textural characteristics. To cope with the uncertainty, we propose a novel fuzzy feature extraction method to encode local texture. The proposed method extends the Local Binary Pattern (LBP) approach by incorporating fuzzy logic in the representation of local patterns of texture in ultrasound images. Fuzzification allows a Fuzzy Local Binary Pattern (FLBP) to contribute to more than a single bin in the distribution of the LBP values used as a feature vector. The proposed FLBP approach was experimentally evaluated for supervised classification of nodular and normal samples from thyroid ultrasound images. The results validate its effectiveness over LBP and other common feature extraction methods.", "title": "" }, { "docid": "8d61cbb3df2ea134fa1252d5eff29597", "text": "Recovering 3D full-body human pose is a challenging problem with many applications. It has been successfully addressed by motion capture systems with body-worn markers and multiple cameras. In this paper, we address the more challenging case of not only using a single camera but also not leveraging markers: going directly from 2D appearance to 3D geometry. Deep learning approaches have shown remarkable abilities to discriminatively learn 2D appearance features. The missing piece is how to integrate 2D, 3D, and temporal information to recover 3D geometry and account for the uncertainties arising from the discriminative model. We introduce a novel approach that treats 2D joint locations as latent variables whose uncertainty distributions are given by a deep fully convolutional neural network. The unknown 3D poses are modeled by a sparse representation and the 3D parameter estimates are realized via an Expectation-Maximization algorithm, where it is shown that the 2D joint location uncertainties can be conveniently marginalized out during inference. Extensive evaluation on benchmark datasets shows that the proposed approach achieves greater accuracy over state-of-the-art baselines. Notably, the proposed approach does not require synchronized 2D-3D data for training and is applicable to “in-the-wild” images, which is demonstrated with the MPII dataset.", "title": "" }, { "docid": "c072806e70d688b75a895249af518cfb", "text": "Modern vehicles have increasing amounts of data streaming continuously on-board their controller area networks. These data are primarily used for controlling the vehicle and for feedback to the driver, but they can also be exploited to detect faults and predict failures. 
The traditional diagnostics paradigm, which relies heavily on human expert knowledge, scales poorly with the increasing amounts of data generated by highly digitised systems. The next generation of equipment monitoring and maintenance prediction solutions will therefore require a different approach, where systems can build up knowledge (semi-)autonomously and learn over the lifetime of the equipment. A key feature in such systems is the ability to capture and encode characteristics of signals, or groups of signals, on-board vehicles using different models. Methods that do this robustly and reliably can be used to describe and compare the operation of the vehicle to previous time periods or to other similar vehicles. In this paper two models for doing this, for a single signal, are presented and compared on a case of on-road failures caused by air compressor faults in city buses. One approach is based on histograms and the other is based on echo state networks. It is shown that both methods are sensitive to the expected changes in the signal’s characteristics and work well on simulated data. However, the histogram model, despite being simpler, handles the deviations in real data better than the echo state network.", "title": "" }, { "docid": "c5c4f4cab75bc6f997803212ee8d30a2", "text": "The privacy and integrity of tenant's data highly rely on the infrastructure of multi-tenant cloud being secure. However, with both hardware and software being controlled by potentially curious or even malicious cloud operators, it is no surprise to see frequent reports of data leakages or abuses in cloud. Unfortunately, most prior solutions require intrusive changes to the cloud platform and none can protect a VM against adversaries controlling the physical machine. This paper analyzes the challenges of transparent VM protection against sophisticated adversaries controlling the whole software and hardware stack. Based on the analysis, this paper proposes HyperCoffer, a hardware-software framework that guards the privacy and integrity of tenant's VMs. HyperCoffer only trusts the processor chip and makes no security assumption on external memory and devices. Hyper-Coffer extends existing processor virtualization with memory encryption and integrity checking to secure data communication with off-chip memory. Unlike prior hardware-based approaches, HyperCoffer retains transparency with existing virtual machines (i.e., operating systems) and requires very few changes to the (untrusted) hypervisor. HyperCoffer introduces a mechanism called VM-Shim that runs in-between a guest VM and the hypervisor. Each VM-Shim instance for a VM runs in a separate protected context and only declassifies necessary information designated by the VM to the hypervisor and external environments (e.g., through NICs). We have implemented a prototype of HyperCoffer in a QEMU-based full-system emulator and the VM-Shim mechanism in a real machine. Performance measurement using trace-based simulation and on a real hardware platform shows that the performance overhead is small (ranging from 0.6% to 13.9% on simulated platform and 0.3% to 6.8% on real hardware for the VM-Shim mechanism).", "title": "" } ]
scidocsrr
6f26496ee241776cd3e3065f6dabc5ec
ATOMO: Communication-efficient Learning via Atomic Sparsification
[ { "docid": "28c03f6fb14ed3b7d023d0983cb1e12b", "text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "title": "" }, { "docid": "6e9e687db8f202a8fa6d49c5996e7141", "text": "Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.", "title": "" }, { "docid": "f2334ce1d717a8f6e91771f95a00b46e", "text": "High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {−1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet doesn’t incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available 1.", "title": "" } ]
[ { "docid": "66610cf27a67760f6625e2fe4bbc7783", "text": "UNLABELLED\nYale Image Finder (YIF) is a publicly accessible search engine featuring a new way of retrieving biomedical images and associated papers based on the text carried inside the images. Image queries can also be issued against the image caption, as well as words in the associated paper abstract and title. A typical search scenario using YIF is as follows: a user provides few search keywords and the most relevant images are returned and presented in the form of thumbnails. Users can click on the image of interest to retrieve the high resolution image. In addition, the search engine will provide two types of related images: those that appear in the same paper, and those from other papers with similar image content. Retrieved images link back to their source papers, allowing users to find related papers starting with an image of interest. Currently, YIF has indexed over 140 000 images from over 34 000 open access biomedical journal papers.\n\n\nAVAILABILITY\nhttp://krauthammerlab.med.yale.edu/imagefinder/", "title": "" }, { "docid": "4bca13cc04fc128844ecc48c0357b974", "text": "From its roots in physics, mathematics, and biology, the study of complexity science, or complex adaptive systems, has expanded into the domain of organizations and systems of organizations. Complexity science is useful for studying the evolution of complex organizations -entities with multiple, diverse, interconnected elements. Evolution of complex organizations often is accompanied by feedback effects, nonlinearity, and other conditions that add to the complexity of existing organizations and the unpredictability of the emergence of new entities. Health care organizations are an ideal setting for the application of complexity science due to the diversity of organizational forms and interactions among organizations that are evolving. Too, complexity science can benefit from attention to the world’s most complex human organizations. Organizations within and across the health care sector are increasingly interdependent. Not only are new, highly powerful and diverse organizational forms being created, but also the restructuring has occurred within very short periods of time. In this chapter, we review the basic tenets of complexity science. We identify a series of key differences between the complexity science and established theoretical approaches to studying health organizations, based on the ways in which time, space, and constructs are framed. The contrasting perspectives are demonstrated using two case examples drawn from healthcare innovation and healthcare integrated systems research. Complexity science broadens and deepens the scope of inquiry into health care organizations, expands corresponding methods of research, and increases the ability of theory to generate valid research on complex organizational forms. Formatted", "title": "" }, { "docid": "7dc54a5750832bc503e77d2893466979", "text": "Functional logic programming languages combine the most important declarative programming paradigms, and attempts to combine these paradigms have a long history. The declarative multi-paradigm language Curry is influenced by recent advances in the foundations and implementation of functional logic languages. The development of Curry is an international initiative intended to provide a common platform for the research, teaching, and application of integrated functional logic languages. 
This paper surveys the foundations of functional logic programming that are relevant for Curry, the main features of Curry, and extensions and applications of Curry and functional logic programming.", "title": "" }, { "docid": "f7c4b71b970b7527cd2650ce1e05ab1b", "text": "BACKGROUND\nPhysician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.\n\n\nMETHODS\nIn this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.\n\n\nFINDINGS\nWe identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5-14]; p<0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67-3·64]; p<0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15-1·14]; p=0·01; I2=58%; 36 studies). High emotional exhaustion decreased from 38% to 24% (14% [11-18]; p<0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0-8]; p=0·04; I2=0%; 16 studies).\n\n\nINTERPRETATION\nThe literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.\n\n\nFUNDING\nArnold P Gold Foundation Research Institute.", "title": "" }, { "docid": "4de49becf20e255d56e7faaa158ddf07", "text": "Given a fixed budget and an arbitrary cost for selecting each node, the budgeted influence maximization (BIM) problem concerns selecting a set of seed nodes to disseminate some information that maximizes the total number of nodes influenced (termed as influence spread) in social networks at a total cost no more than the budget. Our proposed seed selection algorithm for the BIM problem guarantees an approximation ratio of (1-1/√e). The seed selection algorithm needs to calculate the influence spread of candidate seed sets, which is known to be #P-complex. 
Identifying the linkage between the computation of marginal probabilities in Bayesian networks and the influence spread, we devise efficient heuristic algorithms for the latter problem. Experiments using both large-scale social networks and synthetically generated networks demonstrate superior performance of the proposed algorithm with moderate computation costs. Moreover, synthetic datasets allow us to vary the network parameters and gain important insights on the impact of graph structures on the performance of different algorithms.", "title": "" }, { "docid": "14e2eecc36a1c08600598eb65678f99f", "text": "The correct grasp of objects is a key aspect for the right fulfillment of a given task. Obtaining a good grasp requires algorithms to automatically determine proper contact points on the object as well as proper hand configurations, especially when dexterous manipulation is desired, and the quantification of a good grasp requires the definition of suitable grasp quality measures. This article reviews the quality measures proposed in the literature to evaluate grasp quality. The quality measures are classified into two groups according to the main aspect they evaluate: location of contact points on the object and hand configuration. The approaches that combine different measures from the two previous groups to obtain a global quality measure are also reviewed, as well as some measures related to human hand studies and grasp performance. Several examples are presented to illustrate and compare the performance of the reviewed measures.", "title": "" }, { "docid": "0ec7a27ed4d89909887b08c5ea823756", "text": "Brain responses to pain, assessed through positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) are reviewed. Functional activation of brain regions are thought to be reflected by increases in the regional cerebral blood flow (rCBF) in PET studies, and in the blood oxygen level dependent (BOLD) signal in fMRI. rCBF increases to noxious stimuli are almost constantly observed in second somatic (SII) and insular regions, and in the anterior cingulate cortex (ACC), and with slightly less consistency in the contralateral thalamus and the primary somatic area (SI). Activation of the lateral thalamus, SI, SII and insula are thought to be related to the sensory-discriminative aspects of pain processing. SI is activated in roughly half of the studies, and the probability of obtaining SI activation appears related to the total amount of body surface stimulated (spatial summation) and probably also by temporal summation and attention to the stimulus. In a number of studies, the thalamic response was bilateral, probably reflecting generalised arousal in reaction to pain. ACC does not seem to be involved in coding stimulus intensity or location but appears to participate in both the affective and attentional concomitants of pain sensation, as well as in response selection. ACC subdivisions activated by painful stimuli partially overlap those activated in orienting and target detection tasks, but are distinct from those activated in tests involving sustained attention (Stroop, etc.). In addition to ACC, increased blood flow in the posterior parietal and prefrontal cortices is thought to reflect attentional and memory networks activated by noxious stimulation. Less noted but frequent activation concerns motor-related areas such as the striatum, cerebellum and supplementary motor area, as well as regions involved in pain control such as the periaqueductal grey. 
In patients, chronic spontaneous pain is associated with decreased resting rCBF in contralateral thalamus, which may be reverted by analgesic procedures. Abnormal pain evoked by innocuous stimuli (allodynia) has been associated with amplification of the thalamic, insular and SII responses, concomitant to a paradoxical CBF decrease in ACC. It is argued that imaging studies of allodynia should be encouraged in order to understand central reorganisations leading to abnormal cortical pain processing. A number of brain areas activated by acute pain, particularly the thalamus and anterior cingulate, also show increases in rCBF during analgesic procedures. Taken together, these data suggest that hemodynamic responses to pain reflect simultaneously the sensory, cognitive and affective dimensions of pain, and that the same structure may both respond to pain and participate in pain control. The precise biochemical nature of these mechanisms remains to be investigated.", "title": "" }, { "docid": "d30343a3a888139eb239c6605ccb0f41", "text": "Low-power wireless networks are quickly becoming a critical part of our everyday infrastructure. Power consumption is a critical concern, but power measurement and estimation is a challenge. We present Powertrace, which to the best of our knowledge is the first system for network-level power profiling of low-power wireless systems. Powertrace uses power state tracking to estimate system power consumption and a structure called energy capsules to attribute energy consumption to activities such as packet transmissions and receptions. With Powertrace, the power consumption of a system can be broken down into individual activities which allows us to answer questions such as “How much energy is spent forwarding packets for node X?”, “How much energy is spent on control traffic and how much on critical data?”, and “How much energy does application X account for?”. Experiments show that Powertrace is accurate to 94% of the energy consumption of a device. To demonstrate the usefulness of Powertrace, we use it to experimentally analyze the power behavior of the proposed IETF standard IPv6 RPL routing protocol and a sensor network data collection protocol. Through using Powertrace, we find the highest power consumers and are able to reduce the power consumption of data collection with 24%. It is our hope that Powertrace will help the community to make empirical energy evaluation a widely used tool in the low-power wireless research community toolbox.", "title": "" }, { "docid": "a47be4595c576782dbae4e6113797f61", "text": "This research draws on team adaptation theory to study how agile information systems development (ISD) teams respond to non-routine events in their work environment. Based on our findings from a qualitative case study of three ISD teams, we identified non-routine events that could be distinguished according to the three categories task volatility, technological disruption, and team instability. In addition, we found three patterns of reacting to these events that differed regarding complexity and team learning. Our results show that the theoretical link between different types of events and adaption patterns depends on the type of event and the reach of the events’ impact as well as on the extent to which the teams followed an iterative development approach. 
While previous literature either examined ISD team agility as the extent to which agile techniques and methods are applied, or as a capability to adapt to changes, this research is the first to study how more or less agile teams react to non-routine events. By taking a process view and examining the influence of iterativeness on the link between events and adaptation patterns, this study helps reconcile the behavioral and capability perspectives on agility that have so far been disconnected.", "title": "" }, { "docid": "36a694668a10bc0475f447adb1e09757", "text": "Previous findings indicated that when people observe someone’s behavior, they spontaneously infer the traits and situations that cause the target person’s behavior. These inference processes are called spontaneous trait inferences (STIs) and spontaneous situation inferences (SSIs). While both patterns of inferences have been observed, no research has examined the extent to which people from different cultural backgrounds produce these inferences when information affords both trait and situation inferences. Based on the theoretical frameworks of social orientations and thinking styles, we hypothesized that European Canadians would be more likely to produce STIs than SSIs because of the individualistic/independent social orientation and the analytic thinking style dominant in North America, whereas Japanese would produce both STIs and SSIs equally because of the collectivistic/interdependent social orientation and the holistic thinking style dominant in East Asia. Employing the savings-in-relearning paradigm, we presented information that affords both STIs and SSIs and examined cultural differences in the extent of both inferences. The results supported our hypotheses. The relationships between culturally dominant styles of thought and the inference processes in impression formation are discussed.", "title": "" }, { "docid": "08fa4b75c63dfce57c4d9cdcee6882d9", "text": "Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real-world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions this causal knowledge must be learned without access to supervised data. To address this problem we present a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion. It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and learn efficiently. On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge. We demonstrate its ability to handle occlusion and show that it can extrapolate learned knowledge to scenes with different numbers of objects.", "title": "" }, { "docid": "4ad106897a19830c80a40e059428f039", "text": "In 1972, and later in 1979, at the peak of the golden era of Good Old Fashioned Artificial Intelligence (GOFAI), the voice of philosopher Hubert Dreyfus made itself heard as one of the few calls against the hubristic programme of modelling the human mind as a mechanism of symbolic information processing (Dreyfus, 1979). He did not criticise particular solutions to specific problems; instead his deep concern was with the very foundations of the programme. 
His critical stance was unusual, at least for most GOFAI practitioners, in that it did not rely on technical issues, but on a philosophical position emanating from phenomenology and existentialism, a fact contributing to his claims being largely ignored or dismissed for a long time by the AI community. But, for the most part, he was eventually proven right. AI's over-reliance on world-modelling and planning went against the evidence provided by phenomenology of human activity as situated and with a clear and ever-present focus of practical concern – the body and not some algorithm is the originating locus of intelligent activity (if by intelligent we understand intentional, directed and flexible), and the world is not the sum total of all available facts, but the world-as-it-is-for-this-body. Such concerns were later vindicated by the Brooksian revolution in autonomous robotics with its foundations on embodiment, situatedness and de-centralised mechanisms (Brooks, 1991). Brooks' practical and methodological preoccupations – building robots largely based on biologically plausible principles and capable of acting in the real world – proved parallel, despite his claim that his approach was not “German philosophy”, to issues raised by Dreyfus. Putting robotics back as the acid test of AI, as opposed to playing chess and proving theorems, is now often seen as a positive response to Dreyfus' point that AI was unable to capture true meaning by the summing of meaningless processes. This criticism was later devastatingly recast in Searle's Chinese Room argument (1980), and extended by Harnad's Symbol Grounding Problem (1990). Meaningful activity – that is, meaningful for the agent and not only for the designer – must obtain through sensorimotor grounding in the agent's world, and for this both a body and world are needed. Following these developments, work in autonomous robotics and new AI since the 1990s rebelled against pure connectionism because of its lack of biological plausibility and also because most of connectionist research was carried out in vacuo – it was compellingly argued that neural network models as simple input/output processing units are meaningless for modelling the cognitive capabilities of insects, let alone humans, unless they are embedded in a closed sensorimotor loop of interaction with a world (Cliff, 1991). Objective meaning, that is meaningful internal states and states of the world, can only obtain in an embodied agent whose effector and sensor activities become coordinated", "title": "" }, { "docid": "a44910968b9b5fcfcfd29c520d2afe1d", "text": "In [Luke and Spector 1997] we presented a comprehensive suite of data comparing GP crossover and point mutation over four domains and a wide range of parameter settings. Unfortunately, the results were marred by statistical flaws. This revision of the study eliminates these flaws, with three times as much data as the original experiments had. Our results again show that crossover does have some advantage over mutation given the right parameter settings (primarily larger population sizes), though the difference between the two is surprisingly small. Further, the results are complex, suggesting that the big picture is more complicated than is commonly believed.", "title": "" }, { "docid": "f36ef9dd6b78605683f67b382b9639ac", "text": "Stable clones of neural stem cells (NSCs) have been isolated from the human fetal telencephalon. These self-renewing clones give rise to all fundamental neural lineages in vitro. 
Following transplantation into germinal zones of the newborn mouse brain they participate in aspects of normal development, including migration along established migratory pathways to disseminated central nervous system regions, differentiation into multiple developmentally and regionally appropriate cell types, and nondisruptive interspersion with host progenitors and their progeny. These human NSCs can be genetically engineered and are capable of expressing foreign transgenes in vivo. Supporting their gene therapy potential, secretory products from NSCs can correct a prototypical genetic metabolic defect in neurons and glia in vitro. The human NSCs can also replace specific deficient neuronal populations. Cryopreservable human NSCs may be propagated by both epigenetic and genetic means that are comparably safe and effective. By analogy to rodent NSCs, these observations may allow the development of NSC transplantation for a range of disorders.", "title": "" },
    { "docid": "93c9ffa6c83de5fece14eb351315fbed", "text": "In a typical histology study, it is necessary to make thin sections of blocks of frozen or fixed tissue for microscopy. This process has major limitations for obtaining a 3D picture of structural components and the distribution of cells within tissues. For example, in axon regeneration studies, after labeling the injured axons, it is common that the tissue of interest (e.g., spinal cord, optic nerve) is sectioned. Subsequently, when tissue sections are analyzed under the microscope, only short fragments of axons are observed within each section; hence, the 3D information of axonal structures is lost. Because of this confusion, these fragmented axonal profiles might be interpreted as regenerated axons even though they could be spared axons1. In addition, the growth trajectories and target regions of the regenerating axons cannot be identified by visualization of axonal fragments. Similar problems could occur in cancer and immunology studies when only small fractions of target cells are observed within large organs. To avoid these limitations and problems, tissues ideally should be imaged at high spatial resolution without sectioning. However, optical imaging of thick tissues is limited mostly because of scattering of imaging light through the thick tissues, which contain various cellular and extracellular structures with different refractive indices. The imaging light traveling through different structures scatters and loses its excitation and emission efficiency, resulting in a lower resolution and imaging depth2,3. Optical clearing of tissues by organic solvents, which make the biological tissue transparent by matching the refractive indices of different tissue layers to the solvent, has become a prominent method for imaging thick tissues2,4. In cleared tissues, the imaging light does not scatter and travels unobstructed throughout the different tissue layers. For this purpose, the first tissue clearing method was developed about a century ago by Spalteholz, who used a mixture of benzyl alcohol and methyl salicylate to clear large organs such as the heart5,6. In general, the first step of tissue clearing is tissue dehydration, owing to the low refractive index of water compared with cellular structures containing proteins and lipids4.
Subsequently, dehydrated tissue is impregnated with an optical clearing agent, such as glucose7, glycerol8, benzyl alcohol–benzyl benzoate (BABB, also known as Murray’s clear)4,9–13 or dibenzyl ether (DBE)13,14, which have approximately the same refractive index as the impregnated tissue. At the end of the clearing procedure, the cleared tissue hardens and turns transparent, and thus resembles glass.", "title": "" }, { "docid": "b189ae4140663c4e170b7fc579ce0e98", "text": "Modern optical systems increasingly rely on DSP techniques for data transmission at 40Gbs and recently at 100Gbs and above. A significant challenge towards CMOS TX DSP SoC integration is due to requirements for four 6b DACs (Fig. 10.8.1) to operate at 56Gs/s with low power and small footprint. To date, the highest sampling rate of 43Gs/s 6b DAC is reported in SiGe BiCMOS process [1]. CMOS DAC implementations are constraint to 12Gs/s with the output signal frequency limited to 1.5GHz [2–4]. This paper demonstrates more than one order of magnitude improvement in 6b CMOS DAC design with a test circuit operating at 56Gs/s, achieving SFDR >30dBc and ENOB>4.3b up to the output frequency of 26.9GHz. Total power dissipation is less than 750mW and the core DAC die area is less than 0.6×0.4 mm2.", "title": "" }, { "docid": "26e66162b4c7481e9f46e7524a5dfbda", "text": "Network intrusion detection systems are considered as one of the basic entities widely utilized and studied in the field of network security that aim to detect any hostile intrusion within a given network. Among many network intrusion detection systems (NIDS), open source systems have gained substantial preference due to their flexibility, support and cost effectiveness. Snort, an open source system is considered as the de-facto standard for NIDS. In this paper, effort has been made to gauge Snort in terms of performance (packet handling) and detection accuracy against TCP Flooding Distributed Denial of Service attack. The evaluation has been done using a sophisticated test-bench under different hardware configurations. This paper has analyzed the major factors affecting the performance and detection capability of Snort and has recommended techniques to make Snort a better intrusion detection system (IDS). Experimental results have shown significant improvement in Snort packet handling capability by using better hardware. However; Snort detection capability is not improved by improving hardware and is dependent upon its internal architecture (signature database and rate filtration). Furthermore, the findings can be applied to other signature based intrusion detection systems for refining their performance and detection capability.", "title": "" }, { "docid": "f772d3bbec3d92669ff28b616d7a0bde", "text": "This paper reports on the preliminary results of an ongoing study examining the teaching of new primary school topics based on Computational Thinking in New Zealand. We analyse detailed feedback from 13 teachers participating in the study, who had little or no previous experience teaching Computer Science or related topics. From this we extract key themes identified by the teachers that are likely to be encountered when deploying a new curriculum, including unexpected opportunities for cross-curricula learning, development of students' social skills, and engaging a wide range of students. 
From here we articulate key concepts and issues that arise in the primary school context, based on feedback during professional development for the study, and direct feedback from teachers on the experience of delivering the new material in the classroom.", "title": "" },
    { "docid": "bdffbc914108cb74c4130345e568e543", "text": "Early disease detection is a major challenge in the agriculture field. Hence, proper measures have to be taken to fight bioaggressors of crops while minimizing the use of pesticides. The techniques of machine vision are extensively applied to agricultural science, and they hold great promise, especially in the plant protection field, which ultimately leads to better crop management. Our goal is early detection of bioaggressors. The paper describes a software prototype system for pest detection on infected images of different leaves. Images of the infected leaf are captured by a digital camera and processed using image growing and image segmentation techniques to detect infected parts of the particular plants. The detected part is then processed for further feature extraction, which gives a general idea about the pests. This work proposes automatic detection and calculation of the infected area on leaves caused by whitefly (Trialeurodes vaporariorum Westwood) at a mature stage.", "title": "" }
]
scidocsrr
4c9139619ba05a86d8beda7546ad5772
Latent Max-Margin Multitask Learning With Skelets for 3-D Action Recognition
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "c474df285da8106b211dc7fe62733423", "text": "In this paper, we propose an effective method to recognize human actions using 3D skeleton joints recovered from 3D depth data of RGBD cameras. We design a new action feature descriptor for action recognition based on differences of skeleton joints, i.e., EigenJoints which combine action information including static posture, motion property, and overall dynamics. Accumulated Motion Energy (AME) is then proposed to perform informative frame selection, which is able to remove noisy frames and reduce computational cost. We employ non-parametric Naïve-Bayes-Nearest-Neighbor (NBNN) to classify multiple actions. The experimental results on several challenging datasets demonstrate that our approach outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to perform classification in the scenario of online action recognition. We observe that the first 30% to 40% frames are sufficient to achieve comparable results to that using the entire video sequences on the MSR Action3D dataset.", "title": "" }, { "docid": "6aaee9f90e64755c0b8b1306972df748", "text": "Combining information from various data sources has become an important research topic in machine learning with many scientific applications. Most previous studies employ kernels or graphs to integrate different types of features, which routinely assume one weight for one type of features. However, for many problems, the importance of features in one source to an individual cluster of data can be varied, which makes the previous approaches ineffective. In this paper, we propose a novel multi-view learning model to integrate all features and learn the weight for every feature with respect to each cluster individually via new joint structured sparsity-inducing norms. The proposed multi-view learning framework allows us not only to perform clustering tasks, but also to deal with classification tasks by an extension when the labeling knowledge is available. A new efficient algorithm is derived to solve the formulated objective with rigorous theoretical proof on its convergence. 
We applied our new data fusion method to five broadly used multi-view data sets for both clustering and classification. In all experimental results, our method clearly outperforms other related state-of-the-art methods.", "title": "" } ]
[ { "docid": "507cddc2df8ab2775395efb8387dad93", "text": "A novel band-reject element for the design of inline waveguide pseudoelliptic band-reject filters is introduced. The element consists of an offset partial-height post in a rectangular waveguide in which the dominant TE10 mode is propagating. The location of the attenuation pole is primarily determined by the height of the post that generates it. The element allows the implementation of weak, as well as strong coupling coefficients that are encountered in asymmetric band-reject responses with broad stopbands. The coupling strength is controlled by the offset of the post with respect to the center of the main waveguide. The posts are separated by uniform sections of the main waveguide. An equivalent low-pass circuit based on the extracted pole technique is first used in a preliminary design. An improved equivalent low-pass circuit that includes a more accurate equivalent circuit of the band-reject element is then introduced. A synthesis method of the enhanced network is also presented. Filters based on the introduced element are designed, fabricated, and tested. Good agreement between measured and simulated results is achieved", "title": "" }, { "docid": "07a42e7b4c5bc8088e9ff9b57c46f5fb", "text": "In this paper, the concept of divergent component of motion (DCM, also called “Capture Point”) is extended to 3-D. We introduce the “Enhanced Centroidal Moment Pivot point” (eCMP) and the “Virtual Repellent Point” (VRP), which allow for the encoding of both direction and magnitude of the external forces and the total force (i.e., external plus gravitational forces) acting on the robot. Based on eCMP, VRP, and DCM, we present methods for real-time planning and tracking control of DCM trajectories in 3-D. The basic DCM trajectory generator is extended to produce continuous leg force profiles and to facilitate the use of toe-off motion during double support. The robustness of the proposed control framework is thoroughly examined, and its capabilities are verified both in simulations and experiments.", "title": "" }, { "docid": "5dd8a03ed05440ca1f42c2e2920069a1", "text": "This paper introduces the capacitive bulk acoustic wave (BAW) silicon disk gyroscope. The capacitive BAW disk gyroscopes operate in the frequency range of 2-8MHz, are stationary devices with vibration amplitudes less than 20nm, and achieve very high quality factors (Q) in low vacuum (and even in atmosphere), which simplifies their wafer-scale packaging. The device has lower operating voltages compared to low-frequency gyroscopes, which simplifies the interface circuit design and implementation in standard CMOS", "title": "" }, { "docid": "43e8f35e57149d1441d8e75fa754549d", "text": "Software teams should follow a well defined goal and keep their work focused. Work fragmentation is bad for efficiency and quality. In this paper we empirically investigate the relationship between the fragmentation of developer contributions and the number of post-release failures. Our approach is to represent developer contributions with a developer-module network that we call contribution network. We use network centrality measures to measure the degree of fragmentation of developer contributions. Fragmentation is determined by the centrality of software modules in the contribution network. Our claim is that central software modules are more likely to be failure-prone than modules located in surrounding areas of the network. 
We analyze this hypothesis by exploring the network centrality of Microsoft Windows Vista binaries using several network centrality measures as well as linear and logistic regression analysis. In particular, we investigate which centrality measures are significant to predict the probability and number of post-release failures. Results of our experiments show that central modules are more failure-prone than modules located in surrounding areas of the network. Results further confirm that number of authors and number of commits are significant predictors for the probability of post-release failures. For predicting the number of post-release failures the closeness centrality measure is most significant.", "title": "" }, { "docid": "cb1048d4bffb141074a4011279054724", "text": "Question Generation (QG) is the task of generating reasonable questions from a text. It is a relatively new research topic and has its potential usage in intelligent tutoring systems and closed-domain question answering systems. Current approaches include template or syntax based methods. This thesis proposes a novel approach based entirely on semantics. Minimal Recursion Semantics (MRS) is a meta-level semantic representation with emphasis on scope underspecification. With the English Resource Grammar and various tools from the DELPH-IN community, a natural language sentence can be interpreted as an MRS structure by parsing, and an MRS structure can be realized as a natural language sentence through generation. There are three issues emerging from semantics-based QG: (1) sentence simplification for complex sentences, (2) question transformation for declarative sentences, and (3) generation ranking. Three solutions are also proposed: (1) MRS decomposition through a Connected Dependency MRS Graph, (2) MRS transformation from declarative sentences to interrogative sentences, and (3) question ranking by simple language models atop a MaxEnt-based model. The evaluation is conducted in the context of the Question Generation Shared Task and Generation Challenge 2010. The performance of proposed method is compared against other syntax and rule based systems. The result also reveals the challenges of current research on question generation and indicates direction for future work.", "title": "" }, { "docid": "97c5b202cdc1f7d8220bf83663a0668f", "text": "Despite significant recent progress, the best available visual saliency models still lag behind human performance in predicting eye fixations in free-viewing of natural scenes. Majority of models are based on low-level visual features and the importance of top-down factors has not yet been fully explored or modeled. Here, we combine low-level features such as orientation, color, intensity, saliency maps of previous best bottom-up models with top-down cognitive visual features (e.g., faces, humans, cars, etc.) and learn a direct mapping from those features to eye fixations using Regression, SVM, and AdaBoost classifiers. By extensive experimenting over three benchmark eye-tracking datasets using three popular evaluation scores, we show that our boosting model outperforms 27 state-of-the-art models and is so far the closest model to the accuracy of human model for fixation prediction. 
Furthermore, our model successfully detects the most salient object in a scene without sophisticated image processing such as region segmentation.", "title": "" },
    { "docid": "17fde1b7ed30db50790192ea03de2dd1", "text": "Parsing for clothes in images and videos is a critical step towards understanding the human appearance. In this work, we propose a method to segment clothes in settings where there is no restriction on number and type of clothes, pose of the person, viewing angle, occlusion and number of people. This is a challenging task as clothes, even of the same category, have large variations in color and texture. The presence of human joints is the best indicator for cloth types as most of the clothes are consistently worn around the joints. We incorporate the human joint prior by estimating the body joint distributions using the detectors and learning the cloth-joint co-occurrences of different cloth types with respect to body joints. The cloth-joint and cloth-cloth co-occurrences are used as a part of the conditional random field framework to segment the image into different clothing. Our results indicate that we have outperformed the recent attempt [16] on H3D [3], a fairly complex dataset.", "title": "" },
    { "docid": "85c687f7b01d635fa9f46d0dd61098d3", "text": "This paper provides a comprehensive survey of the technical achievements in the research area of Image Retrieval, especially Content-Based Image Retrieval, an area so active and prosperous in the past few years. The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multi-dimensional indexing, and system design, three of the fundamental bases of Content-Based Image Retrieval. Furthermore, based on the state-of-the-art technology available now and the demand from real-world applications, open research issues are identified, and future promising research directions are suggested.", "title": "" },
    { "docid": "2c7bfe8b2694f9c478a08baf2790e72f", "text": "Energy management means to optimize one of the most complex and important technical creations that we know: the energy system. While there is plenty of experience in optimizing energy generation and distribution, it is the demand side that receives increasing attention by research and industry. Demand Side Management (DSM) is a portfolio of measures to improve the energy system at the side of consumption. It ranges from improving energy efficiency by using better materials, over smart energy tariffs with incentives for certain consumption patterns, up to sophisticated real-time control of distributed energy resources. This paper gives an overview and a taxonomy for DSM, analyzes the various types of DSM, and gives an outlook on the latest demonstration projects in this domain.", "title": "" },
    { "docid": "5ad4b3c5905b7b716a806432b755e60b", "text": "The formation of both germline cysts and the germinal epithelium is described during the ovary development in Cyprinus carpio. As in the undifferentiated gonad of mammals, cords of PGCs become oogonia when they are surrounded by somatic cells. Ovarian differentiation is triggered when oogonia proliferate and enter meiosis, becoming oocytes. Proliferation of a single oogonium results in clusters of interconnected oocytes, the germline cysts, that are encompassed by somatic prefollicle cells and form cell nests. Both PGCs and cell nests are delimited by a basement membrane.
Ovarian follicles originate from the germline cysts, about the time of meiotic arrest, as prefollicle cells surround oocytes, individualizing them. They synthesize a basement membrane and an oocyte forms a follicle. With the formation of the stroma, unspecialized mesenchymal cells differentiate, and encompass each follicle, forming the theca. The follicle, basement membrane, and theca constitute the follicle complex. Along the ventral region of the differentiating ovary, the epithelium invaginates to form the ovigerous lamellae whose developing surface epithelium, the germinal epithelium, is composed of epithelial cells, germline cysts with oogonia, oocytes, and developing follicles. The germinal epithelium rests upon a basement membrane. The follicles complexes are connected to the germinal epithelium by a shared portion of basement membrane. In the differentiated ovary, germ cell proliferation in the epithelium forms nests in which there are the germline cysts. Germline cysts, groups of cells that form from a single founder cell and are joined by intercellular bridges, are conserved throughout the vertebrates, as is the germinal epithelium.", "title": "" }, { "docid": "6d60f0cd26681db25f322d77cadfdd34", "text": "Over the last years, several authors have signaled that state of the art categorization methods fail to perform well when trained and tested on data from different databases. The general consensus in the literature is that this issue, known as domain adaptation and/or dataset bias, is due to a distribution mismatch between data collections. Methods addressing it go from max-margin classifiers to learning how to modify the features and obtain a more robust representation. The large majority of these works use BOW feature descriptors, and learning methods based on image-to-image distance functions. Following the seminal work of [6], in this paper we challenge these two assumptions. We experimentally show that using the NBNN classifier over existing domain adaptation databases achieves always very strong performances. We build on this result, and present an NBNN-based domain adaptation algorithm that learns iteratively a class metric while inducing, for each sample, a large margin separation among classes. To the best of our knowledge, this is the first work casting the domain adaptation problem within the NBNN framework. Experiments show that our method achieves the state of the art, both in the unsupervised and semi-supervised settings.", "title": "" }, { "docid": "3dfb419706ae85d232753a085dc145f7", "text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. 
In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.", "title": "" }, { "docid": "4122d900e0f527d4e9ed1005a68b95bf", "text": "We present a method that learns to tell rear signals from a number of frames using a deep learning framework. The proposed framework extracts spatial features with a convolution neural network (CNN), and then applies a long short term memory (LSTM) network to learn the long-term dependencies. The brake signal classifier is trained using RGB frames, while the turn signal is recognized via a two-step localization approach. The two separate classifiers are learned to recognize the static brake signals and the dynamic turn signals. As a result, our recognition system can recognize 8 different rear signals via the combined two classifiers in real-world traffic scenes. Experimental results show that our method is able to obtain more accurate predictions than using only the CNN to classify rear signals with time sequence inputs.", "title": "" }, { "docid": "66ba9c32c29e905a018aab3a25733fd1", "text": "Information environments have the power to affect people's perceptions and behaviors. In this paper, we present the results of studies in which we characterize the gender bias present in image search results for a variety of occupations. We experimentally evaluate the effects of bias in image search results on the images people choose to represent those careers and on people's perceptions of the prevalence of men and women in each occupation. We find evidence for both stereotype exaggeration and systematic underrepresentation of women in search results. We also find that people rate search results higher when they are consistent with stereotypes for a career, and shifting the representation of gender in image search results can shift people's perceptions about real-world distributions. 
We also discuss tensions between desires for high-quality results and broader societal goals for equality of representation in this space.", "title": "" }, { "docid": "57e50c15b3107a473f5fb74472b74fcc", "text": "PURPOSE\nThe purpose of this article is to provide an overview of our previous work on roll-over shapes, which are the effective rocker shapes that the lower limb systems conform to during walking.\n\n\nMETHOD\nThis article is a summary of several recently published articles from the Northwestern University Prosthetics Research Laboratory and Rehabilitation Engineering Research Program on the topic of roll-over shapes. The roll-over shape is a measurement of centre of pressure of the ground reaction force in body-based coordinates. This measurement is interpreted as the effective rocker shape created by lower limb systems during walking.\n\n\nRESULTS\nOur studies have shown that roll-over shapes in able-bodied subjects do not change appreciably for conditions of level ground walking, including walking at different speeds, while carrying different amounts of weight, while wearing shoes of different heel heights, or when wearing shoes with different rocker radii. In fact, results suggest that able-bodied humans will actively change their ankle movements to maintain the same roll-over shapes.\n\n\nCONCLUSIONS\nThe consistency of the roll-over shapes to level surface walking conditions has provided insight for design, alignment and evaluation of lower limb prostheses and orthoses. Changes to ankle-foot and knee-ankle-foot roll-over shapes for ramp walking conditions have suggested biomimetic (i.e. mimicking biology) strategies for adaptable ankle-foot prostheses and orthoses.", "title": "" }, { "docid": "0830abcb23d763c1298bf4605f81eb72", "text": "A key technical challenge in performing 6D object pose estimation from RGB-D image is to fully leverage the two complementary data sources. Prior works either extract information from the RGB image and depth separately or use costly post-processing steps, limiting their performances in highly cluttered scenes and real-time applications. In this work, we present DenseFusion, a generic framework for estimating 6D pose of a set of known objects from RGBD images. DenseFusion is a heterogeneous architecture that processes the two data sources individually and uses a novel dense fusion network to extract pixel-wise dense feature embedding, from which the pose is estimated. Furthermore, we integrate an end-to-end iterative pose refinement procedure that further improves the pose estimation while achieving near real-time inference. Our experiments show that our method outperforms state-of-the-art approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed method to a real robot to grasp and manipulate objects based on the estimated pose. Our code and video are available at https://sites.google.com/view/densefusion/.", "title": "" }, { "docid": "479c250bd9284ab1a216a11fa5199f61", "text": "Two Gram-stain-negative, non-motile, non-spore-forming, rod-shaped bacterial strains, designated 3B-2(T) and 10AO(T), were isolated from a sand sample collected from the west coast of the Korean peninsula by using low-nutrient media, and their taxonomic positions were investigated in a polyphasic study. The strains did not grow on marine agar. They grew optimally at 30 °C and pH 6.5-7.5. Strains 3B-2(T) and 10AO(T) shared 97.5 % 16S rRNA gene sequence similarity and mean level of DNA-DNA relatedness of 12 %. 
In phylogenetic trees based on 16S rRNA gene sequences, strains 3B-2(T) and 10AO(T), together with several uncultured bacterial clones, formed independent lineages within the evolutionary radiation encompassed by the phylum Bacteroidetes. Strains 3B-2(T) and 10AO(T) contained MK-7 as the predominant menaquinone and iso-C(15 : 0) and C(16 : 1)ω5c as the major fatty acids. The DNA G+C contents of strains 3B-2(T) and 10AO(T) were 42.8 and 44.6 mol%, respectively. Strains 3B-2(T) and 10AO(T) exhibited very low levels of 16S rRNA gene sequence similarity (<85.0 %) to the type strains of recognized bacterial species. These data were sufficient to support the proposal that the novel strains should be differentiated from previously known genera of the phylum Bacteroidetes. On the basis of the data presented, we suggest that strains 3B-2(T) and 10AO(T) represent two distinct novel species of a new genus, for which the names Ohtaekwangia koreensis gen. nov., sp. nov. (the type species; type strain 3B-2(T) = KCTC 23018(T) = CCUG 58939(T)) and Ohtaekwangia kribbensis sp. nov. (type strain 10AO(T) = KCTC 23019(T) = CCUG 58938(T)) are proposed.", "title": "" },
    { "docid": "094906bcd076ae3207ba04755851c73a", "text": "The paper describes our approach for SemEval-2018 Task 1: Affect Detection in Tweets. We perform experiments with manually compiled sentiment lexicons and word embeddings. We test their performance on the Twitter affect detection task to determine which features produce the most informative representation of a sentence. We demonstrate that general-purpose word embeddings produce a more informative sentence representation than lexicon features. However, combining lexicon features with embeddings yields higher performance than embeddings alone.", "title": "" },
    { "docid": "807938c1343b30a1f7a55d6a8483f36e", "text": "Microglial cells are the main innate immune cells of the complex cellular structure of the brain. These cells respond quickly to pathogens and injury, accumulate in regions of degeneration and produce a wide variety of pro-inflammatory molecules. These observations have resulted in active debate regarding the exact role of microglial cells in the brain and whether they have beneficial or detrimental functions. Careful targeting of these cells could have therapeutic benefits for several types of trauma and disease specific to the central nervous system. This Review discusses the molecular details underlying the innate immune response in the brain during infection, injury and disease.", "title": "" },
    { "docid": "36ebd6dd8a4fa1d69138696d21e19342", "text": "Very high dimensional learning systems become theoretically possible when training examples are abundant. The computing cost then becomes the limiting factor. Any efficient learning algorithm should at least take a brief look at each example. But should all examples be given equal attention? This contribution proposes an empirical answer. We first present an online SVM algorithm based on this premise. LASVM yields competitive misclassification rates after a single pass over the training examples, outspeeding state-of-the-art SVM solvers. Then we show how active example selection can yield faster training, higher accuracies, and simpler models, using only a fraction of the training example labels.", "title": "" }
]
scidocsrr
7a14585072bf49bb486bc4081003f3ba
Self-localization and control of an omni-directional mobile robot based on an omni-directional camera
[ { "docid": "662ae9d792b3889dbd0450a65259253a", "text": "We present a new parametrization for point features within monocular simultaneous localization and mapping (SLAM) that permits efficient and accurate representation of uncertainty during undelayed initialization and beyond, all within the standard extended Kalman filter (EKF). The key concept is direct parametrization of the inverse depth of features relative to the camera locations from which they were first viewed, which produces measurement equations with a high degree of linearity. Importantly, our parametrization can cope with features over a huge range of depths, even those that are so far from the camera that they present little parallax during motion---maintaining sufficient representative uncertainty that these points retain the opportunity to \"come in'' smoothly from infinity if the camera makes larger movements. Feature initialization is undelayed in the sense that even distant features are immediately used to improve camera motion estimates, acting initially as bearing references but not permanently labeled as such. The inverse depth parametrization remains well behaved for features at all stages of SLAM processing, but has the drawback in computational terms that each point is represented by a 6-D state vector as opposed to the standard three of a Euclidean XYZ representation. We show that once the depth estimate of a feature is sufficiently accurate, its representation can safely be converted to the Euclidean XYZ form, and propose a linearity index that allows automatic detection and conversion to maintain maximum efficiency---only low parallax features need be maintained in inverse depth form for long periods. We present a real-time implementation at 30 Hz, where the parametrization is validated in a fully automatic 3-D SLAM system featuring a handheld single camera with no additional sensing. Experiments show robust operation in challenging indoor and outdoor environments with a very large ranges of scene depth, varied motion, and also real time 360deg loop closing.", "title": "" } ]
[ { "docid": "7247eb6b90d23e2421c0d2500359d247", "text": "The large-scale collection and exploitation of personal information to drive targeted online advertisements has raised privacy concerns. As a step towards understanding these concerns, we study the relationship between how much information is collected and how valuable it is for advertising. We use HTTP traces consisting of millions of users to aid our study and also present the first comparative study between aggregators. We develop a simple model that captures the various parameters of today's advertising revenues, whose values are estimated via the traces. Our results show that per aggregator revenue is skewed (5% accounting for 90% of revenues), while the contribution of users to advertising revenue is much less skewed (20% accounting for 80% of revenue). Google is dominant in terms of revenue and reach (presence on 80% of publishers). We also show that if all 5% of the top users in terms of revenue were to install privacy protection, with no corresponding reaction from the publishers, then the revenue can drop by 30%.", "title": "" }, { "docid": "79295c880d24a71ed0b9e81216516311", "text": "The rapid growth in the number and diversity of Mashup services, coupled with the myriad of functionally similar Mashup services, makes it difficult to find suitable Mashup services to develop Mashup-based software applications due to an unprecedentedly large number of choices of Mashup services. Even if the existing latent factor based methods show significant improvements in Mashup service clustering and discovery, it is still challenging to find Mashup services with high accuracy due to overlooking of relationships among Mashup services. The relationships among Mashup services actually can be exploited in mining latent functional factors to improve the accuracy of clustering and discovery. In this paper, we propose a Mashup service clustering method based on an integration of service content and network via exploiting a two-level topic model. This method, firstly designs a two-level topic model to mine latent topics for representing functional features of Mashup services. Secondly, it uses two different random walk processes to derive and incorporate the topic distribution of Mashup services at service network level into the topic distribution of Mashup services at the service content level. Thirdly, K-means and Agnes algorithm are used to perform Mashup service clustering based on latent topics' similarity. Finally, we conduct a comprehensive evaluation to measure performance of our method. Compared with other existing clustering approaches, experimental results show that our approach achieves a significant improvement in terms of precision, recall, purity and entropy.", "title": "" }, { "docid": "0dba7993e502824bda56bdcf80278c26", "text": "The recent expansion of the Internet of Things (IoT) and the consequent explosion in the volume of data produced by smart devices have led to the outsourcing of data to designated data centers. However, to manage these huge data stores, centralized data centers, such as cloud storage cannot afford auspicious way. There are many challenges that must be addressed in the traditional network architecture due to the rapid growth in the diversity and number of devices connected to the internet, which is not designed to provide high availability, real-time data delivery, scalability, security, resilience, and low latency. 
To address these issues, this paper proposes a novel blockchain-based distributed cloud architecture with a software defined networking (SDN) enable controller fog nodes at the edge of the network to meet the required design principles. The proposed model is a distributed cloud architecture based on blockchain technology, which provides low-cost, secure, and on-demand access to the most competitive computing infrastructures in an IoT network. By creating a distributed cloud infrastructure, the proposed model enables cost-effective high-performance computing. Furthermore, to bring computing resources to the edge of the IoT network and allow low latency access to large amounts of data in a secure manner, we provide a secure distributed fog node architecture that uses SDN and blockchain techniques. Fog nodes are distributed fog computing entities that allow the deployment of fog services, and are formed by multiple computing resources at the edge of the IoT network. We evaluated the performance of our proposed architecture and compared it with the existing models using various performance measures. The results of our evaluation show that performance is improved by reducing the induced delay, reducing the response time, increasing throughput, and the ability to detect real-time attacks in the IoT network with low performance overheads.", "title": "" }, { "docid": "aa3767bc35b8d465aa779f36fb40319e", "text": "This paper introduces the Minnesota Intrusion Detection System (MINDS), which uses a suite of data mining techniques to automatically detect attacks against computer networks and systems. While the long-term objective of MINDS is to address all aspects of intrusion detection, in this paper we present two specific contributions. First, we present MINDS anomaly detection module that assigns a score to each connection that reflects how anomalous the connection is compared to the normal network traffic. Experimental results on live network traffic at the University of Minnesota show that our anomaly detection techniques have been successful in automatically detecting several novel intrusions that could not be identified using state-of-the-art signature-based tools such as SNORT. Many of these have been reported on the CERT/CC list of recent advisories and incident notes. We also present the results of comparing the MINDS anomaly detection module to SPADE (Statistical Packet Anomaly Detection Engine), which is designed to detect stealthy scans.", "title": "" }, { "docid": "20c1f7f19ebc7797abc8d25ebb1a7daa", "text": "Medical records contain detailed notes written by medical care providers about a patient’s physical and mental health, analysis of lab tests and radiology results, treatment courses, and more. This information may be valuable in improving medical care. In this project, we apply deep learning models to the multi-label classification task of assigning ICD-9 labels from these medical notes. Previous works have applied machine learning methods, like logistic regression and hierarchical SVM, using bag-of-words features to this task. 
On a dataset of around 40,000 critical care unit patients with 10 labels and with 100 labels, we find that a Recurrent Neural Network (RNN) and an RNN with Long Short-term Memory (LSTM) units show an improvement over the Binary Relevance Logistic Regression model.", "title": "" },
    { "docid": "842e7c5b825669855617133b0067efc9", "text": "This research proposes a robust method for disc localization and cup segmentation that incorporates masking to avoid misclassifying areas as well as forming the structure of the cup based on edge detection. Our method has been evaluated using two fundus image datasets, namely: D-I and D-II, comprising 60 and 38 images, respectively. The proposed method of disc localization achieves an average Fscore of 0.96 and average boundary distance of 7.7 for D-I, and 0.96 and 9.1, respectively, for D-II. The cup segmentation method attains an average Fscore of 0.88 and average boundary distance of 13.8 for D-I, and 0.85 and 18.0, respectively, for D-II. The estimation errors (mean ± standard deviation) of our method for the value of the vertical cup-to-disc diameter ratio, compared against the boundary drawn by the expert, have similar values for D-I and D-II, namely 0.04 ± 0.04. Overall, the result of our method indicates its robustness for glaucoma evaluation.", "title": "" },
    { "docid": "0cafe66b71b0a7fca2b682866b0c4848", "text": "Using ultra-wideband (UWB) wireless sensors placed on a person to continuously monitor health information is a promising new application. However, there are currently no detailed models describing the UWB radio channel around the human body making it difficult to design a suitable communication system. To address this problem, we have measured radio propagation around the body in a typical indoor environment and incorporated these results into a simple model. We then implemented this model on a computer and compared experimental data with the simulation results. This paper proposes a simple statistical channel model and a practical implementation useful for evaluating UWB body area communication systems.", "title": "" },
    { "docid": "bee944285ddd3e1e51e5056720a91aa0", "text": "The iterative Born approximation (IBA) is a well-known method for describing waves scattered by semitransparent objects. In this letter, we present a novel nonlinear inverse scattering method that combines IBA with an edge-preserving total variation regularizer. The proposed method is obtained by relating iterations of IBA to layers of an artificial multilayer neural network and developing a corresponding error backpropagation algorithm for efficiently estimating the permittivity of the object.
Simulations illustrate that, by accounting for multiple scattering, the method successfully recovers the permittivity distribution where the traditional linear inverse scattering fails.", "title": "" }, { "docid": "ce7a903eda7fb28d1dfcf3c7f250b0ae", "text": "Long-term assessment of ambulatory behavior and joint motion are valuable tools for the evaluation of therapy effectiveness in patients with neuromuscular disorders and gait abnormalities. Even though there are several tools available to quantify ambulatory behavior in a home environment, reliable measurement of joint motion is still limited to laboratory tests. The aim of this study was to develop and evaluate a novel inertial sensor system for ambulatory behavior and joint motion measurement in the everyday environment. An algorithm for behavior classification, step detection, and knee angle calculation was developed. The validation protocol consisted of simulated daily activities in a laboratory environment. The tests were performed with ten healthy subjects and eleven patients with multiple sclerosis. Activity classification showed comparable performance to commercially available activPAL sensors. Step detection with our sensor system was more accurate. The calculated flexion-extension angle of the knee joint showed a root mean square error of less than 5° compared with results obtained using an electro-mechanical goniometer. This new system combines ambulatory behavior assessment and knee angle measurement for long-term measurement periods in a home environment. The wearable sensor system demonstrated high validity for behavior classification and knee joint angle measurement in a laboratory setting.", "title": "" }, { "docid": "54093733f08ced4d9e3a5362235bd944", "text": "Tumour-suppressor genes are indispensable for the maintenance of genomic integrity. Recently, several of these genes, including those encoding p53, PTEN, RB1 and ARF, have been implicated in immune responses and inflammatory diseases. In particular, the p53 tumour- suppressor pathway is involved in crucial aspects of tumour immunology and in homeostatic regulation of immune responses. Other studies have identified roles for p53 in various cellular processes, including metabolism and stem cell maintenance. Here, we discuss the emerging roles of p53 and other tumour-suppressor genes in tumour immunology, as well as in additional immunological settings, such as virus infection. This relatively unexplored area could yield important insights into the homeostatic control of immune cells in health and disease and facilitate the development of more effective immunotherapies. Consequently, tumour-suppressor genes are emerging as potential guardians of immune integrity.", "title": "" }, { "docid": "486a5be12690b7c48481de1819eeec28", "text": "Optimized nutrition through supplementation of diet with plant derived phytochemicals has attracted significant attention to prevent the onset of many chronic diseases including cardiovascular impairments, cancer, and metabolic disorder. These phytonutrients alone or in combination with others are believed to impart beneficial effects and play pivotal role in metabolic abnormalities such as dyslipidemia, insulin resistance, hypertension, glucose intolerance, systemic inflammation, and oxidative stress. 
Epidemiological and preclinical studies demonstrated that fruits, vegetables, and beverages rich in carotenoids, isoflavones, phytoestrogens, and phytosterols delay the onset of atherosclerosis or act as a chemoprotective agent by interacting with the underlying pathomechanisms. Phytochemicals exert their beneficial effects either by reducing the circulating levels of cholesterol or by inhibiting lipid oxidation, while others exhibit anti-inflammatory and antiplatelet activities. Additionally, they reduce neointimal thickening by inhibiting proliferation of smooth muscle cells and also improve endothelium dependent vasorelaxation by modulating bioavailability of nitric-oxide and voltage-gated ion channels. However, detailed and profound knowledge on specific molecular targets of each phytochemical is very important to ensure safe use of these active compounds as a therapeutic agent. Thus, this paper reviews the active antioxidative, antiproliferative, anti-inflammatory, or antiangiogenesis role of various phytochemicals for prevention of chronic diseases.", "title": "" },
    { "docid": "2526915745dda9026836347292f79d12", "text": "I show that a functional representation of self-similarity (as the one occurring in fractals) is provided by squeezed coherent states. In this way, the dissipative model of brain is shown to account for the self-similarity in brain background activity suggested by power-law distributions of power spectral densities of electrocorticograms. I also briefly discuss the action-perception cycle in the dissipative model with reference to intentionality in terms of trajectories in the memory state space.", "title": "" },
    { "docid": "23405156faf3cf650544887a85cad226", "text": "A Wilkinson power divider operating not only at one frequency f0, but also at its first harmonic 2f0 is presented. This power divider consists of two branches of impedance transformer, each of which consists of two sections of 1/6-wave transmission-line with different characteristic impedance. The two outputs are connected through a resistor, an inductor, and a capacitor. All the features of a conventional Wilkinson power divider, such as an equal power split, impedance matching at all ports, and a good isolation between the two output ports, can be fulfilled at f0 and 2f0, simultaneously.", "title": "" },
    { "docid": "1bdfcf7f162bfc8c8c51a153fd4ea437", "text": "In this paper, modified image segmentation techniques were applied on MRI scan images in order to detect brain tumors. Also in this paper, a modified Probabilistic Neural Network (PNN) model that is based on learning vector quantization (LVQ) with image and data analysis and manipulation techniques is proposed to carry out an automated brain tumor classification using MRI-scans. The assessment of the modified PNN classifier performance is measured in terms of the training performance, classification accuracies and computational time. The simulation results showed that the modified PNN gives rapid and accurate classification compared with the image processing and published conventional PNN techniques. Simulation results also showed that the proposed system outperforms the corresponding PNN system presented in [30], and successfully handles the process of brain tumor classification in MRI images with 100% accuracy when the spread value is equal to 1.
These results also claim that the proposed LVQ-based PNN system decreases the processing time to approximately 79% compared with the conventional PNN which makes it very promising in the field of in-vivo brain tumor detection and identification. Keywords— Probabilistic Neural Network, Edge detection, image segmentation, brain tumor detection and identification", "title": "" }, { "docid": "c3838ee9c296364d2bea785556dfd2fb", "text": "Empirical validation of software metrics suites to predict fault proneness in object-oriented (OO) components is essential to ensure their practical use in industrial settings. In this paper, we empirically validate three OO metrics suites for their ability to predict software quality in terms of fault-proneness: the Chidamber and Kemerer (CK) metrics, Abreu's Metrics for Object-Oriented Design (MOOD), and Bansiya and Davis' Quality Metrics for Object-Oriented Design (QMOOD). Some CK class metrics have previously been shown to be good predictors of initial OO software quality. However, the other two suites have not been heavily validated except by their original proposers. Here, we explore the ability of these three metrics suites to predict fault-prone classes using defect data for six versions of Rhino, an open-source implementation of JavaScript written in Java. We conclude that the CK and QMOOD suites contain similar components and produce statistical models that are effective in detecting error-prone classes. We also conclude that the class components in the MOOD metrics suite are not good class fault-proneness predictors. Analyzing multivariate binary logistic regression models across six Rhino versions indicates these models may be useful in assessing quality in OO classes produced using modern highly iterative or agile software development processes.", "title": "" }, { "docid": "541c40376d8e914b626a0b0baf8c5eef", "text": "Effusion is the abnormal accumulation of fluid within a body cavity that can result from a variety of disease processes. This article reviews the normal production and resorption of body cavity fluid and the pathophysiology of abnormal fluid accumulation. In addition, classification schemes, differential diagnoses, and currently available diagnostic tests for evaluation of effusions are reviewed.", "title": "" }, { "docid": "db2553268fc3ccaddc3ec7077514655c", "text": "Aspect extraction is a task to abstract the common properties of objects from corpora discussing them, such as reviews of products. Recent work on aspect extraction is leveraging the hierarchical relationship between products and their categories. However, such effort focuses on the aspects of child categories but ignores those from parent categories. Hence, we propose an LDA-based generative topic model inducing the two-layer categorical information (CAT-LDA), to balance the aspects of both a parent category and its child categories. Our hypothesis is that child categories inherit aspects from parent categories, controlled by the hierarchy between them. Experimental results on 5 categories of Amazon.com products show that both common aspects of parent category and the individual aspects of subcategories can be extracted to align well with the common sense. We further evaluate the manually extracted aspects of 16 products, resulting in an average hit rate of 79.10%.", "title": "" }, { "docid": "2dde5d26ab14ee6be365b23402cc13e1", "text": "Compressive sensing is a revolutionary idea proposed recently to achieve much lower sampling rate for sparse signals. 
For large wireless sensor networks, the events are relatively sparse compared with the number of sources. Because of deployment cost, the number of sensors is limited, and due to energy constraint, not all the sensors are turned on all the time. In this paper, the first contribution is to formulate the problem for sparse event detection in wireless sensor networks as a compressive sensing problem. The number of (wake-up) sensors can be greatly reduced to the similar level of the number of sparse events, which is much smaller than the total number of sources. Second, we suppose the event has the binary nature, and employ the Bayesian detection using this prior information. Finally, we analyze the performance of the compressive sensing algorithms under the Gaussian noise. From the simulation results, we show that the sampling rate can reduce to 25% without sacrificing performance. With further decreasing the sampling rate, the performance is gradually reduced until 10% of sampling rate. Our proposed detection algorithm has much better performance than the l1-magic algorithm proposed in the literature.", "title": "" }, { "docid": "2c4c7f8dcf1681e278183525d520fc8c", "text": "In the course of studies on the isolation of bioactive compounds from Philippine plants, the seeds of Moringa oleifera Lam. were examined and from the ethanol extract were isolated the new O-ethyl-4-(alpha-L-rhamnosyloxy)benzyl carbamate (1) together with seven known compounds, 4(alpha-L-rhamnosyloxy)-benzyl isothiocyanate (2), niazimicin (3), niazirin (4), beta-sitosterol (5), glycerol-1-(9-octadecanoate) (6), 3-O-(6'-O-oleoyl-beta-D-glucopyranosyl)-beta-sitosterol (7), and beta-sitosterol-3-O-beta-D-glucopyranoside (8). Four of the isolates (2, 3, 7, and 8), which were obtained in relatively good yields, were tested for their potential antitumor promoting activity using an in vitro assay which tested their inhibitory effects on Epstein-Barr virus-early antigen (EBV-EA) activation in Raji cells induced by the tumor promoter, 12-O-tetradecanoyl-phorbol-13-acetate (TPA). All the tested compounds showed inhibitory activity against EBV-EA activation, with compounds 2, 3 and 8 having shown very significant activities. Based on the in vitro results, niazimicin (3) was further subjected to in vivo test and found to have potent antitumor promoting activity in the two-stage carcinogenesis in mouse skin using 7,12-dimethylbenz(a)anthracene (DMBA) as initiator and TPA as tumor promoter. From these results, niazimicin (3) is proposed to be a potent chemo-preventive agent in chemical carcinogenesis.", "title": "" }, { "docid": "a59c9aa1b2f09534adf593150624aee4", "text": "Pan-sharpening is a process of acquiring a high resolution multispectral (MS) image by combining a low resolution MS image with a corresponding high resolution panchromatic (PAN) image. In this paper, we propose a new variational pan-sharpening method based on three basic assumptions: 1) the gradient of PAN image could be a linear combination of those of the pan-sharpened image bands; 2) the upsampled low resolution MS image could be a degraded form of the pan-sharpened image; and 3) the gradient in the spectrum direction of pan-sharpened image should be approximated to those of the upsampled low resolution MS image. An energy functional, whose minimizer is related to the best pan-sharpened result, is built based on these assumptions. We discuss the existence of minimizer of our energy and describe the numerical procedure based on the split Bregman algorithm. 
To verify the effectiveness of our method, we qualitatively and quantitatively compare it with some state-of-the-art schemes using QuickBird and IKONOS data. Particularly, we classify the existing quantitative measures into four categories and choose two representatives in each category for more reasonable quantitative evaluation. The results demonstrate the effectiveness and stability of our method in terms of the related evaluation benchmarks. Besides, the computation efficiency comparison with other variational methods also shows that our method is remarkable.", "title": "" } ]
scidocsrr
59836d99faccb59201828922dd0a55f8
Adaptive Patient-Cooperative Control of a Compliant Ankle Rehabilitation Robot (CARR) With Enhanced Training Safety
[ { "docid": "5350af2d42f9321338e63666dcd42343", "text": "Robot-aided physical therapy should encourage subject's voluntary participation to achieve rapid motor function recovery. In order to enhance subject's cooperation during training sessions, the robot should allow deviation in the prescribed path depending on the subject's modified limb motions subsequent to the disability. In the present work, an interactive training paradigm based on the impedance control was developed for a lightweight intrinsically compliant parallel ankle rehabilitation robot. The parallel ankle robot is powered by pneumatic muscle actuators (PMAs). The proposed training paradigm allows the patients to modify the robot imposed motions according to their own level of disability. The parallel robot was operated in four training modes namely position control, zero-impedance control, nonzero-impedance control with high compliance, and nonzero-impedance control with low compliance to evaluate the performance of proposed control scheme. The impedance control scheme was evaluated on 10 neurologically intact subjects. The experimental results show that an increase in robotic compliance encouraged subjects to participate more actively in the training process. This work advances the current state of the art in the compliant actuation of parallel ankle rehabilitation robots in the context of interactive training.", "title": "" } ]
[ { "docid": "ad3f3dfba0cdc514bc63f55d68fb0e2d", "text": "KDD 99 intrusion detection datasets, which are based on DARPA 98 dataset, provides labeled data for researchers working in the field of intrusion detection and is the only labeled dataset publicly available. Numerous researchers employed the datasets in KDD 99 intrusion detection competition to study the utilization of machine learning for intrusion detection and reported detection rates up to 91% with false positive rates less than 1%. To substantiate the performance of machine learning based detectors that are trained on KDD 99 training data; we investigate the relevance of each feature in KDD 99 intrusion detection datasets. To this end, information gain is employed to determine the most discriminating features for each class.", "title": "" }, { "docid": "99e3a2d4dbb1423be73adaa4e9288a94", "text": "Playing a musical instrument is an intense, multisensory, and motor experience that usually commences at an early age and requires the acquisition and maintenance of a range of skills over the course of a musician's lifetime. Thus, musicians offer an excellent human model for studying the brain effects of acquiring specialized sensorimotor skills. For example, musicians learn and repeatedly practice the association of motor actions with specific sound and visual patterns (musical notation) while receiving continuous multisensory feedback. This association learning can strengthen connections between auditory and motor regions (e.g., arcuate fasciculus) while activating multimodal integration regions (e.g., around the intraparietal sulcus). We argue that training of this neural network may produce cross-modal effects on other behavioral or cognitive operations that draw on this network. Plasticity in this network may explain some of the sensorimotor and cognitive enhancements that have been associated with music training. These enhancements suggest the potential for music making as an interactive treatment or intervention for neurological and developmental disorders, as well as those associated with normal aging.", "title": "" }, { "docid": "92ab453b6fc05a8e017745ff9cd95329", "text": "The execution of numerically intensive programs presents a challenge to memory system designers. Numerical program execution can be accelerated by pipelined arithmetic units, but to be effective, must be supported by high speed memory access. A cache memory is a well known hardware mechanism used to reduce the average memory access latency. Numerical programs, however, often have poor cache performance. Stride directed prefetching has been proposed to improve the cache performance of numerical programs executing on a vector processor, This paper shows how this approach can be extended to a scalar processor by using a simple hardware mechanism, called a stride prediction table (SPT), to calculate the stride distances of array accesses made from within the loop body of a program. The results using selected programs from the PERFECT and SPEC benchmarks show that stride directed prefetching on a scalar processor can significantly reduce the cache miss rate of particular programs and a SPT need only a small number of entries to be effective. the cache miss ratio for the scalar execution of the matrix multiply for matrix sizes of 100 x 100. For comparison purposes the corresponding vector execution is also shown. The results were obtained using trace driven simulation of 2 4 Kbyte cache with block sizes of 8, 16,32 and 64 bytes. 
The traces are from executions on an Alliant FX/80. Each trace is for single processor execution where the scalar and vector versions are generated using compiler optimizations. Two miss ratios are shown for each execution; ALL means that all memory data references are simulated and MATRIX means that only references to matrix data (data size of 8 bytes) are simulated. There are 19 and 2.2 million references for scalar and vector executions respectively but only 4 and 2 million of these references are to matrix data. Note that the vector miss ratios are computed relative to the number of vector accesses and not the number of vector referencing instructions. For example, a vector instruction may load 32 elements but this is counted as 32 vector accesses.", "title": "" }, { "docid": "042fcc75e4541d27b97e8c2fe02a2ddf", "text": "Folk medicine suggests that pomegranate (peels, seeds and leaves) has anti-inflammatory properties; however, the precise mechanisms by which this plant affects the inflammatory process remain unclear. Herein, we analyzed the anti-inflammatory properties of a hydroalcoholic extract prepared from pomegranate leaves using a rat model of lipopolysaccharide-induced acute peritonitis. Male Wistar rats were treated with either the hydroalcoholic extract, sodium diclofenac, or saline, and 1 h later received an intraperitoneal injection of lipopolysaccharides. Saline-injected animals (i. p.) were used as controls. Animals were culled 4 h after peritonitis induction, and peritoneal lavage and peripheral blood samples were collected. Serum and peritoneal lavage levels of TNF-α as well as TNF-α mRNA expression in peritoneal lavage leukocytes were quantified. Total and differential leukocyte populations were analyzed in peritoneal lavage samples. Lipopolysaccharide-induced increases of both TNF-α mRNA and protein levels were diminished by treatment with either pomegranate leaf hydroalcoholic extract (57 % and 48 % mean reduction, respectively) or sodium diclofenac (41 % and 33 % reduction, respectively). Additionally, the numbers of peritoneal leukocytes, especially neutrophils, were markedly reduced in hydroalcoholic extract-treated rats with acute peritonitis. These results demonstrate that pomegranate leaf extract may be used as an anti-inflammatory drug which suppresses the levels of TNF-α in acute inflammation.", "title": "" }, { "docid": "cd014a0fcae02be9fb28c48d6b061c7e", "text": "Human choices are remarkably susceptible to the manner in which options are presented. This so-called \"framing effect\" represents a striking violation of standard economic accounts of human rationality, although its underlying neurobiology is not understood. We found that the framing effect was specifically associated with amygdala activity, suggesting a key role for an emotional system in mediating decision biases. Moreover, across individuals, orbital and medial prefrontal cortex activity predicted a reduced susceptibility to the framing effect. This finding highlights the importance of incorporating emotional processes within models of human choice and suggests how the brain may modulate the effect of these biasing influences to approximate rationality.", "title": "" }, { "docid": "db7a4ab8d233119806e7edf2a34fffd1", "text": "Recent research has shown that the performance of search personalization depends on the richness of user profiles which normally represent the user’s topical interests. 
In this paper, we propose a new embedding approach to learning user profiles, where users are embedded on a topical interest space. We then directly utilize the user profiles for search personalization. Experiments on query logs from a major commercial web search engine demonstrate that our embedding approach improves the performance of the search engine and also achieves better search performance than other strong baselines.", "title": "" }, { "docid": "ed012eec144e6f2f0257141404563928", "text": "This paper presents a new direct active and reactive power control (DPC) of grid-connected doubly fed induction generator (DFIG)-based wind turbine systems. The proposed DPC strategy employs a nonlinear sliding-mode control scheme to directly calculate the required rotor control voltage so as to eliminate the instantaneous errors of active and reactive powers without involving any synchronous coordinate transformations. Thus, no extra current control loops are required, thereby simplifying the system design and enhancing the transient performance. Constant converter switching frequency is achieved by using space vector modulation, which eases the designs of the power converter and the ac harmonic filter. Simulation results on a 2-MW grid-connected DFIG system are provided and compared with those of classic voltage-oriented vector control (VC) and conventional lookup table (LUT) DPC. The proposed DPC provides enhanced transient performance similar to the LUT DPC and keeps the steady-state harmonic spectra at the same level as the VC strategy.", "title": "" }, { "docid": "090f6460180573922dc86866033124c6", "text": "In a dc distribution system, where multiple power sources supply a common bus, current sharing is an important issue. When renewable energy resources are considered, such as photovoltaic (PV), dc/dc converters are needed to decouple the source voltage, which can vary due to operating conditions and maximum power point tracking (MPPT), from the dc bus voltage. Since different sources may have different power delivery capacities that may vary with time, coordination of the interface to the bus is of paramount importance to ensure reliable system operation. Further, since these sources are most likely distributed throughout the system, distributed controls are needed to ensure a robust and fault tolerant control system. This paper presents a model predictive control-based MPPT and model predictive control-based droop current regulator to interface PV in smart dc distribution systems. Back-to-back dc/dc converters control both the input current from the PV module and the droop characteristic of the output current injected into the distribution bus. The predictive controller speeds up both of the control loops, since it predicts and corrects error before the switching signal is applied to the respective converter.", "title": "" }, { "docid": "cad7acb95f74628fa81cc6a4e1c85e8e", "text": "For patient and personnel safety, agitated and violent individuals are sometime physically restrained during out-of-hospital ambulance transport. We report two cases of unexpected death in restrained, agitated individuals while they were being trans-ported by advanced life support ambulance. Both patients had been placed in hobble restraints by law enforcement. At autopsy, toxicologic analysis revealed nonlethal levels of amphetamines in one patient and nonlethal levels of ethanol, cocaine, and amphetamines in the other. 
In both cases the cause of death was determined to be positional asphyxiation during restraint for excited delirium. Physicians and emergency service personnel should be aware of the potential complications of using physical restraints for control of agitated patients.", "title": "" }, { "docid": "88302ac0c35e991b9db407f268fdb064", "text": "We propose a novel memory architecture for in-memory computation called McDRAM, where DRAM dies are equipped with a large number of multiply accumulate (MAC) units to perform matrix computation for neural networks. By exploiting high internal memory bandwidth and reducing off-chip memory accesses, McDRAM realizes both low latency and energy efficient computation. In our experiments, we obtained the chip layout based on the state-of-the-art memory, LPDDR4 where McDRAM is equipped with 2048 MACs in a single chip package with a small area overhead (4.7%). Compared with the state-of-the-art accelerator, TPU and the power-efficient GPU, Nvidia P4, McDRAM offers 9.5× and 14.4× speedup, respectively, in the case that the large-scale MLPs and RNNs adopt the batch size of 1. McDRAM also gives 2.1× and 3.7× better computational efficiency in TOPS/W than TPU and P4, respectively, for the large batches.", "title": "" }, { "docid": "15eb2816764256d6227409d30e862f88", "text": "This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive “heads-up” visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier’s view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina – Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°x30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.", "title": "" }, { "docid": "8452091115566adaad8a67154128dff8", "text": "The Millennium Ecosystem Assessment (MA) advanced a powerful vision for the future (MA 2005), and now it is time to deliver. 
The vision of the MA – and of the prescient ecologists and economists whose work formed its foundation – is a world in which people and institutions appreciate natural systems as vital assets, recognize the central roles these assets play in supporting human well-being, and routinely incorporate their material and intangible values into decision making. This vision is now beginning to take hold, fueled by innovations from around the world – from pioneering local leaders to government bureaucracies, and from traditional cultures to major corporations (eg a new experimental wing of Goldman Sachs; Daily and Ellison 2002; Bhagwat and Rutte 2006; Kareiva and Marvier 2007; Ostrom et al. 2007; Goldman et al. 2008). China, for instance, is investing over 700 billion yuan (about US$102.6 billion) in ecosystem service payments, in the current decade (Liu et al. 2008). The goal of the Natural Capital Project – a partnership between Stanford University, The Nature Conservancy, and World Wildlife Fund (www.naturalcapitalproject.org) – is to help integrate ecosystem services into everyday decision making around the world. This requires turning the valuation of ecosystem services into effective policy and finance mechanisms – a problem that, as yet, no one has solved on a large scale. A key challenge remains: relative to other forms of capital, assets embodied in ecosystems are often poorly understood, rarely monitored, and are undergoing rapid degradation (Heal 2000a; MA 2005; Mäler et al. 2008). The importance of ecosystem services is often recognized only after they have been lost, as was the case following Hurricane Katrina (Chambers et al. 2007). Natural capital, and the ecosystem services that flow from it, are usually undervalued – by governments, businesses, and the public – if indeed they are considered at all (Daily et al. 2000; Balmford et al. 2002; NRC 2005). Two fundamental changes need to occur in order to replicate, scale up, and sustain the pioneering efforts that are currently underway, to give ecosystem services weight in decision making. First, the science of ecosystem services needs to advance rapidly. In promising a return (of services) on investments in nature, the scientific community needs to deliver the knowledge and tools necessary to forecast and quantify this return. To help address this challenge, the Natural Capital Project has developed InVEST (a system for Integrated Valuation of Ecosystem Services).", "title": "" }, { "docid": "678d9eab7d1e711f97bf8ef5aeaebcc4", "text": "This work presents a study of current and future bus systems with respect to their security against various malicious attacks. After a brief description of the most well-known and established vehicular communication systems, we present feasible attacks and potential exposures for these automotive networks. We also provide an approach for secured automotive communication based on modern cryptographic mechanisms that provide secrecy, manipulation prevention and authentication to solve most of the vehicular bus security issues.", "title": "" }, { "docid": "a5d568b4a86dcbda2c09894c778527ea", "text": "INTRODUCTION\nHypoglycemia (Hypo) is the most common side effect of insulin therapy in people with type 1 diabetes (T1D). Over time, patients with T1D become unaware of signs and symptoms of Hypo. Hypo unawareness leads to morbidity and mortality. Diabetes alert dogs (DADs) represent a unique way to help patients with Hypo unawareness. 
Our group has previously presented data in abstract form which demonstrates the sensitivity and specificity of DADS. The purpose of our current study is to expand evaluation of DAD sensitivity and specificity using a method that reduces the possibility of trainer bias.\n\n\nMETHODS\nWe evaluated 6 dogs aging 1-10 years old who had received an average of 6 months of training for Hypo alert using positive training methods. Perspiration samples were collected from patients during Hypo (BG 46-65 mg/dL) and normoglycemia (BG 85-136 mg/dl) and were used in training. These samples were placed in glass vials which were then placed into 7 steel cans (1 Hypo, 2 normal, 4 blank) randomly placed by roll of a dice. The dogs alerted by either sitting in front of, or pushing, the can containing the Hypo sample. Dogs were rewarded for appropriate recognition of the Hypo samples using a food treat via a remote control dispenser. The results were videotaped and statistically evaluated for sensitivity (proportion of lows correctly alerted, \"true positive rate\") and specificity (proportion of blanks + normal samples not alerted, \"true negative rate\") calculated after pooling data across all trials for all dogs.\n\n\nRESULTS\nAll DADs displayed statistically significant (p value <0.05) greater sensitivity (min 50.0%-max 87.5%) to detect the Hypo sample than the expected random correct alert of 14%. Specificity ranged from a min of 89.6% to a max of 97.9% (expected rate is not defined in this scenario).\n\n\nCONCLUSIONS\nOur results suggest that properly trained DADs can successfully recognize and alert to Hypo in an in vitro setting using smell alone.", "title": "" }, { "docid": "19a9d9286f5af35bac3e051e9bc5213b", "text": "The healthcare environment is more and more data enriched, but the amount of knowledge getting from those data is very less, because lack of data analysis tools. We need to get the hidden relationships from the data. In the healthcare system to predict the heart attack perfectly, there are some techniques which are already in use. There is some lack of accuracy in the available techniques like Naïve Bayes. Here, this paper proposes the system which uses neural network and Decision tree (ID3) to predict the heart attacks. Here the dataset with 6 attributes is used to diagnose the heart attacks. The dataset used is acath heart attack dataset provided by UCI machine learning repository. The results of the prediction give more accurate output than the other techniques.", "title": "" }, { "docid": "73594825d26212ad974c9b932c9245dd", "text": "BACKGROUND\nFamilial partial lipodystrophies are rare monogenic disorders that are often associated with diabetes. In such cases, it can be difficult to achieve glycaemic control.\n\n\nCASE REPORT\nWe report a 34-year old woman with familial partial lipodystrophy type 2 (Dunnigan) and diabetes; her hyperglycaemia persisted despite metformin treatment. A combined intravenous glucose tolerance-euglycaemic clamp test showed severe insulin resistance, as expected, but also showed strongly diminished first-phase insulin secretion. After the latter finding, we added the glucagon-like peptide-1 receptor agonist liraglutide to the patient's treatment regimen, which rapidly normalized plasma glucose levels. 
HbA1c values <42 mmol/mol (6.0%) have now been maintained for over 4 years.\n\n\nCONCLUSION\nThis case suggests that a glucagon-like peptide-1 receptor agonist may be a useful component of glucose-lowering therapy in individuals with familial partial lipodystrophy and diabetes mellitus.", "title": "" }, { "docid": "4fee0cba7a71b074db0bcf922cc111ae", "text": "The ascendance of emotion theory, recent advances in cognitive science and neuroscience, and increasingly important findings from developmental psychology and learning make possible an integrative account of the nature and etiology of anxiety and its disorders. This model specifies an integrated set of triple vulnerabilities: a generalized biological (heritable) vulnerability, a generalized psychological vulnerability based on early experiences in developing a sense of control over salient events, and a more specific psychological vulnerability in which one learns to focus anxiety on specific objects or situations. The author recounts the development of anxiety and related disorders based on these triple vulnerabilities and discusses implications for the classification of emotional disorders.", "title": "" }, { "docid": "47ad04e8c93d39a500ab79a6d25d32f0", "text": "OpenGV is a new C++ library for calibrated realtime 3D geometric vision. It unifies both central and non-central absolute and relative camera pose computation algorithms within a single library. Each problem type comes with minimal and non-minimal closed-form solvers, as well as non-linear iterative optimization and robust sample consensus methods. OpenGV therefore contains an unprecedented level of completeness with regard to calibrated geometric vision algorithms, and it is the first library with a dedicated focus on a unified real-time usage of non-central multi-camera systems, which are increasingly popular in robotics and in the automotive industry. This paper introduces OpenGV's flexible interface and abstraction for multi-camera systems, and outlines the performance of all contained algorithms. It is our hope that the introduction of this open-source platform will motivate people to use it and potentially also include more algorithms, which would further contribute to the general accessibility of geometric vision algorithms, and build a common playground for the fair comparison of different solutions.", "title": "" }, { "docid": "ba1d1f2cfeac871bf63164cb0b431af9", "text": "The motivation behind model-driven software development is to move the focus of work from programming to solution modeling. The model-driven approach has a potential to increase development productivity and quality by describing important aspects of a solution with more human-friendly abstractions and by generating common application fragments with templates. For this vision to become reality, software development tools need to automate the many tasks of model construction and transformation, including construction and transformation of models that can be round-trip engineered into code. In this article, we briefly examine different approaches to model transformation and offer recommendations on the desirable characteristics of a language for describing model transformations. 
In doing so, we are hoping to offer a measuring stick for judging the quality of future model transformation technologies.", "title": "" }, { "docid": "617382c83d0af103e977edb3b5b2fba1", "text": "With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important mobile application, especially when users travel away from home. However, this type of recommendation is very challenging compared to traditional recommender systems. A user may visit only a limited number of spatial items, leading to a very sparse user-item matrix. This matrix becomes even sparser when the user travels to a distant place, as most of the items visited by a user are usually located within a short distance from the user’s home. Moreover, user interests and behavior patterns may vary dramatically across different time and geographical regions. In light of this, we propose ST-SAGE, a spatial-temporal sparse additive generative model for spatial item recommendation in this article. ST-SAGE considers both personal interests of the users and the preferences of the crowd in the target region at the given time by exploiting both the co-occurrence patterns and content of spatial items. To further alleviate the data-sparsity issue, ST-SAGE exploits the geographical correlation by smoothing the crowd’s preferences over a well-designed spatial index structure called the spatial pyramid. To speed up the training process of ST-SAGE, we implement a parallel version of the model inference algorithm on the GraphLab framework. We conduct extensive experiments; the experimental results clearly demonstrate that ST-SAGE outperforms the state-of-the-art recommender systems in terms of recommendation effectiveness, model training efficiency, and online recommendation efficiency.", "title": "" } ]
scidocsrr
4841900ad160a1834d5707296656c2c2
A system for acquiring, processing, and rendering panoramic light field stills for virtual reality
[ { "docid": "dfb95120d19a363a27d162b598cdcf26", "text": "Light field imaging has emerged as a technology allowing to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene integrating the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high-dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.", "title": "" }, { "docid": "f5da20b4dcdabe473efbd3fd0dea1049", "text": "A surface light field is a function that assigns a color to each ray originating on a surface. Surface light fields are well suited to constructing virtual images of shiny objects under complex lighting conditions. This paper presents a framework for construction, compression, interactive rendering, and rudimentary editing of surface light fields of real objects. Generalization of vector quantization and principal component analysis are used to construct a compressed representation of an object's surface light field from photographs and range scans. A new rendering algorithm achieves interactive rendering of images from the compressed representation, incorporating view-dependent geometric level-of-detail control. The surface light field representation can also be directly edited to yield plausible surface light fields for small changes in surface geometry and reflectance properties.", "title": "" }, { "docid": "acefbbb42607f2d478a16448644bd6e6", "text": "The time complexity of incremental structure from motion (SfM) is often known as O(n^4) with respect to the number of cameras. As bundle adjustment (BA) being significantly improved recently by preconditioned conjugate gradient (PCG), it is worth revisiting how fast incremental SfM is. We introduce a novel BA strategy that provides good balance between speed and accuracy. Through algorithm analysis and extensive experiments, we show that incremental SfM requires only O(n) time on many major steps including BA. Our method maintains high accuracy by regularly re-triangulating the feature matches that initially fail to triangulate. We test our algorithm on large photo collections and long video sequences with various settings, and show that our method offers state of the art performance for large-scale reconstructions. The presented algorithm is available as part of VisualSFM at http://homes.cs.washington.edu/~ccwu/vsfm/.", "title": "" } ]
[ { "docid": "f6a149131a816989ae246a6de0c50dbc", "text": "In this paper a comparison of outlier detection algorithms is presented, we present an overview on outlier detection methods and experimental results of six implemented methods. We applied these methods for the prediction of stellar populations parameters as well as on machine learning benchmark data, inserting artificial noise and outliers. We used kernel principal component analysis in order to reduce the dimensionality of the spectral data. Experiments on noisy and noiseless data were performed.", "title": "" }, { "docid": "bbd1e7e579d2543be236a5f69cf42981", "text": "To date, there is almost no work on the use of adverbs in sentiment analysis, nor has there been any work on the use of adverb-adjective combinations (AACs). We propose an AAC-based sentiment analysis technique that uses a linguistic analysis of adverbs of degree. We define a set of general axioms (based on a classification of adverbs of degree into five categories) that all adverb scoring techniques must satisfy. Instead of aggregating scores of both adverbs and adjectives using simple scoring functions, we propose an axiomatic treatment of AACs based on the linguistic classification of adverbs. Three specific AAC scoring methods that satisfy the axioms are presented. We describe the results of experiments on an annotated set of 200 news articles (annotated by 10 students) and compare our algorithms with some existing sentiment analysis algorithms. We show that our results lead to higher accuracy based on Pearson correlation with human subjects.", "title": "" }, { "docid": "7db555e42bff7728edb8fb199f063cba", "text": "The need for more post-secondary students to major and graduate in STEM fields is widely recognized. Students' motivation and strategic self-regulation have been identified as playing crucial roles in their success in STEM classes. But, how students' strategy use, self-regulation, knowledge building, and engagement impact different learning outcomes is not well understood. Our goal in this study was to investigate how motivation, strategic self-regulation, and creative competency were associated with course achievement and long-term learning of computational thinking knowledge and skills in introductory computer science courses. Student grades and long-term retention were positively associated with self-regulated strategy use and knowledge building, and negatively associated with lack of regulation. Grades were associated with higher study effort and knowledge retention was associated with higher study time. For motivation, higher learning- and task-approach goal orientations, endogenous instrumentality, and positive affect and lower learning-, task-, and performance-avoid goal orientations, exogenous instrumentality and negative affect were associated with higher grades and knowledge retention and also with strategic self-regulation and engagement. Implicit intelligence beliefs were associated with strategic self-regulation, but not grades or knowledge retention. Creative competency was associated with knowledge retention, but not grades, and with higher strategic self-regulation. Implications for STEM education are discussed.", "title": "" }, { "docid": "57290d8e0a236205c4f0ce887ffed3ab", "text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. 
This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.", "title": "" }, { "docid": "096bc66bb6f4c04109cf26d9d474421c", "text": "A statistical analysis of full text downloads of articles in Elsevier's ScienceDirect covering all disciplines reveals large differences in download frequencies, their skewness, and their correlation with Scopus-based citation counts, between disciplines, journals, and document types. Download counts tend to be two orders of magnitude higher and less skewedly distributed than citations. A mathematical model based on the sum of two exponentials does not adequately capture monthly download counts. The degree of correlation at the article level within a journal is similar to that at the journal level in the discipline covered by that journal, suggesting that the differences between journals are to a large extent discipline-specific. Despite the fact that in all study journals download and citation counts per article positively correlate, little overlap may exist between the set of articles appearing in the top of the citation distribution and that with the most frequently downloaded ones. Usage and citation leaks, bulk downloading, differences between reader and author populations in a subject field, the type of document or its content, differences in obsolescence patterns between downloads and citations, different functions of reading and citing in the research process, all provide possible explanations of differences between download and citation distributions.", "title": "" }, { "docid": "26e79793addc4750dcacc0408764d1e1", "text": "It has been shown that integration of acoustic and visual information especially in noisy conditions yields improved speech recognition results. This raises the question of how to weight the two modalities in different noise conditions. Throughout this paper we develop a weighting process adaptive to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments. The neural networks were in all cases trained on clean data. Firstly, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria to estimate the reliability of the audio stream. Based on this, a mapping between the measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. 
Finally, the possibilities and limitations of adaptive weighting are compared and discussed.", "title": "" }, { "docid": "4a857c91833176f4ac2cc4a0ca04e9b7", "text": "Effective design and fabrication of 3-D electronic circuits are among the most pressing issues for future engineering. Although a variety of flexible devices have been developed, most of them are still designed two-dimensionally. In this letter, we introduce a novel idea to fabricate a 3-D wiring board. We produced the 3-D wiring board from one desktop inkjet printer by printing conductive pattern and a 2-D pattern to induce self-folding. We printed silver ink onto a paper to realize the conductive trace. Meanwhile, a 3-D structure was constructed with self-folding induced by water-based ink printed from the same printer. The paper with the silver ink self-folds along the printed line. The printed silver ink is sufficiently thin to be flexible. Even if the silver ink is already printed, the paper can self-fold or self-bend to consist the 3-D wiring board. A paper scratch driven robot was developed using this method. The robot traveled 56 mm in 15 s according to the vibration induced by the electrostatic force of the printed electrode. The size of the robot is 30 × 15 × 10 mm. This work proposes a new method to design 3-D wiring board, and shows extended possibilities for printed paper mechatronics.", "title": "" }, { "docid": "72142ddc1ad3906fd0b1320ab3a1e48f", "text": "The American Herbal Pharmacopoeia (AHP) today announced the release of a section of the soon-to-be-completed Cannabis Therapeutic Compendium Cannabis in the Management and Treatment of Seizures and Epilepsy. This scientific review is one of numerous scientific reviews that will encompass the broad range of science regarding the therapeutic effects and safety of cannabis. In recent months there has been considerable attention given to the potential benefit of cannabis for treating intractable seizure disorders including rare forms of epilepsy. For this reason, the author of the section, Dr. Ben Whalley, and AHP felt it important to release this section, in its near-finalized form, into the public domain for free dissemination. The full release of AHP's Therapeutic Compendium is scheduled for early 2014. Dr. Whalley is a Senior Lecturer in Pharmacology and Pharmacy Director of Research at the School of Pharmacy of the University of Reading in the United Kingdom. He is also a member of the UK Epilepsy Research Network. Dr. Whalley's research interests lie in investigating neuronal processes that underlie complex physiological functions such as neuronal hyperexcitability states and their consequential disorders such as epilepsy, ataxia and dystonias, as well as learning and memory. Since 2003, Dr. Whalley has authored and co-authored numerous scientific peer-reviewed papers on the potential effects of cannabis in relieving seizure disorders and investigating the underlying pathophysiological mechanisms of these disorders. The release of this comprehensive review is timely given the growing claims being made for cannabis to relieve even the most severe forms of seizures. According to Dr. Whalley: \" Recent announcements of regulated human clinical trials of pure components of cannabis for the treatment of epilepsy have raised hopes among patients with drug-resistant epilepsy, their caregivers, and clinicians. 
Also, claims in the media of the successful use of cannabis extracts for the treatment of epilepsies, particularly in children, have further highlighted the urgent need for new and effective treatments. \" However, Dr. Whalley added, \" We must bear in mind that the use of any new treatment, particularly in the critically ill, carries inherent risks. Releasing this section of the monograph into the public domain at this time provides clinicians, patients, and their caregivers with a single document that comprehensively summarizes the scientific knowledge to date regarding cannabis and epilepsy and so fully support informed, evidence-based decision making. \" This release also follows recommendations of the Epilepsy Foundation, which has called for increasing medical …", "title": "" }, { "docid": "f1dc6bc187668d773a193f01ef79fd5c", "text": "Nowadays, the research on robot on-map localization while using landmarks is more intensively dealing with visual code recognition. One of the most popular landmarks of this type is the QR-code. This paper is devoted to the experimental evaluation of vision-based on-map localization procedures that apply QR-codes or NAO marks, as implemented in service robot control systems. In particular, the NAO humanoid robot is our test-bed platform, while the use of robotic systems for hazard detection is the motivation of this study. Especially, the robot can be a useful aid for elderly people affected by dementia and cognitive disorientation. The detection of the door opening is assumed to be important to ensure safety in the home environment. Thus, the paper focus on door opening detection while using QR-codes.", "title": "" }, { "docid": "6347b4594d9bf79cf1ec03711ad79176", "text": "The paper deals with a Wireless Sensor Network (WSN) as a reliable solution for capturing the kinematics of a fire front spreading over a fuel bed. To provide reliable information in fire studies and support fire fighting strategies, a Wireless Sensor Network must be able to perform three sequential actions: 1) sensing thermal data in the open as the gas temperature; 2) detecting a fire i.e., the spatial position of a flame; 3) tracking the fire spread during its spatial and temporal evolution. One of the great challenges in performing fire front tracking with a WSN is to avoid the destruction of motes by the fire. This paper therefore shows the performance of Wireless Sensor Network when the motes are protected with a thermal insulation dedicated to track a fire spreading across vegetative fuels on a field scale. The resulting experimental WSN is then used in series of wildfire experiments performed in the open in vegetation areas ranging in size from 50 to 1,000 m(2).", "title": "" }, { "docid": "a1b42797757640593103412764c70b7c", "text": "With the recent advances in wireless communication and cloud computing technology, conventional robots can enhance their capabilities on the fly. A number of functionalities can be offered to robots for which they were not initially designed for such as heavy computation offloading, navigation, map building, etc. In this paper, we have offered a cloud-enabled framework for robots where along with offloading their computation they can also use navigation, map building, path planning, etc. as a service. In order to validate the working of our framework, a RTAB-map (Real Time Appearance Based Mapping) service is built for mobile robots. 
In this, we have incorporated SOA (Service Oriented Architecture) design to standardized the RTAB-map service, i.e., service listing are defined via WSDL (Web Service Definition Language) and communicated over SOAP (Simple Object Access Protocol) protocol respectively. Some simulation results of service execution and cloud usage analysis are also presented to support the proposed system.", "title": "" }, { "docid": "78921cbdbc80f714598d8fb9ae750c7e", "text": "Duplicates in data management are common and problematic. In this work, we present a translation of Datalog under bag semantics into a well-behaved extension of Datalog, the so-called warded Datalog±, under set semantics. From a theoretical point of view, this allows us to reason on bag semantics by making use of the well-established theoretical foundations of set semantics. From a practical point of view, this allows us to handle the bag semantics of Datalog by powerful, existing query engines for the required extension of Datalog. This use of Datalog± is extended to give a set semantics to duplicates in Datalog± itself. We investigate the properties of the resulting Datalog± programs, the problem of deciding multiplicities, and expressibility of some bag operations. Moreover, the proposed translation has the potential for interesting applications such as to Multiset Relational Algebra and the semantic web query language SPARQL with bag semantics. 2012 ACM Subject Classification Information systems → Query languages; Theory of computation → Logic; Theory of computation → Semantics and reasoning", "title": "" }, { "docid": "f89d3ab1c5340d47d68bfda94853d934", "text": "PURPOSE\nThe aim of the present systematic review was to address the following question: in patients treated with dental implants placed in pristine bone, what are the clinical and radiographic outcomes of bone-level (BL) implants in comparison to tissue-level (TL) implants after restoration with dental prostheses?\n\n\nMATERIALS AND METHODS\nScanning of online literature databases from 1966 to January 2012, supplemented by hand searching, was conducted to identify relevant prospective randomized controlled trials, controlled clinical trials, and cohort studies. Sequential screenings at the title, abstract, and full-text levels were performed independently and in duplicate. A meta-analysis was conducted to compile data from the primary studies included in this systematic review.\n\n\nRESULTS\nThe search strategy revealed a total of 5,998. Screening at the title level resulted in 752 papers, while screening at the abstract level yielded 92 publications. Full-text reading identified nine articles that fulfilled the inclusion criteria of this review. The pooled estimated difference between BL and TL implants in mean marginal bone loss was 0.05 mm (95% confidence interval [CI], -0.03 to 0.13 mm), with no statistically significant difference between the groups at 1 year after placement of the definitive prostheses. The relative risk of implant loss was estimated at 1.00 (95% CI, 0.99 to 1.02) at 1 year and at 1.01 (95% CI, 0.99 to 1.03) at 3 years after restoration, indicating no evidence of an increased risk of implant loss in BL compared to TL implants.\n\n\nCONCLUSIONS\nNo statistically significant differences in bone loss and survival rates were detected between BL and TL dental implants over a short-term observation period (1 to 3 years). 
Thus, both implant systems fulfill the requirements for the replacement of missing teeth in implant dentistry.", "title": "" }, { "docid": "0801dc8a870053ba36c0db9d25314cfb", "text": "Crowdsourcing is a new emerging distributed computing and business model on the backdrop of Internet blossoming. With the development of crowdsourcing systems, the data size of crowdsourcers, contractors and tasks grows rapidly. The worker quality evaluation based on big data analysis technology has become a critical challenge. This paper first proposes a general worker quality evaluation algorithm that is applied to any critical tasks such as tagging, matching, filtering, categorization and many other emerging applications, without wasting resources. Second, we realize the evaluation algorithm in the Hadoop platform using the MapReduce parallel programming model. Finally, to effectively verify the accuracy and the effectiveness of the algorithm in a wide variety of big data scenarios, we conduct a series of experiments. The experimental results demonstrate that the proposed algorithm is accurate and effective. It has high computing performance and horizontal scalability. And it is suitable for large-scale worker quality evaluations in a big data environment.", "title": "" }, { "docid": "1497e47ada570797e879bbc4aba432a1", "text": "The mental health of university students is an area of increasing concern worldwide. The objective of this study is to examine the prevalence of depression, anxiety and stress among a group of Turkish university students. Depression Anxiety and Stress Scale (DASS-42) completed anonymously in the students’ respective classrooms by 1,617 students. Depression, anxiety and stress levels of moderate severity or above were found in 27.1, 47.1 and 27% of our respondents, respectively. Anxiety and stress scores were higher among female students. First- and second-year students had higher depression, anxiety and stress scores than the others. Students who were satisfied with their education had lower depression, anxiety and stress scores than those who were not satisfied. The high prevalence of depression, anxiety and stress symptoms among university students is alarming. This shows the need for primary and secondary prevention measures, with the development of adequate and appropriate support services for this group.", "title": "" }, { "docid": "bd590555337d3ada2c641c5f1918cf2c", "text": "Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.", "title": "" }, { "docid": "b8ed09081032a790b1c5c4bb3afebfff", "text": "This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. 
There are two components: i) The face proposal component computes face proposals via estimating facial key-points and the 3D transformation parameters for each predicted keypoint w.r.t. the 3D mean face model. ii) The face verification component computes detection results by refining proposals based on configuration pooling.", "title": "" }, { "docid": "1310a8cc022b2b9f0fd20aaefa15dee5", "text": "An athletic profile should encompass the physiological, biomechanical, anthropometric and performance measures pertinent to the athlete's sport and discipline. The measurement systems and procedures used to create these profiles are constantly evolving and becoming more precise and practical. This is a review of strength and ballistic assessment methodologies used in sport, a critique of current maximum strength [one-repetition maximum (1RM) and isometric strength] and ballistic performance (bench throw and jump capabilities) assessments for the purpose of informing practitioners and evolving current assessment methodologies. The reliability of the various maximum strength and ballistic assessment methodologies were reported in the form of intra-class correlation coefficients (ICC) and coefficient of variation (%CV). Mean percent differences (Mdiff = [/Xmethod1 - Xmethod2/ / (Xmethod1 + Xmethod2)] x 100) and effect size (ES = [Xmethod2 - Xmethod1] ÷ SDmethod1) calculations were used to assess the magnitude and spread of methodological differences for a given performance measure of the included studies. Studies were grouped and compared according to their respective performance measure and movement pattern. The various measurement systems (e.g., force plates, position transducers, accelerometers, jump mats, optical motion sensors and jump-and-reach apparatuses) and assessment procedures (i.e., warm-up strategies, loading schemes and rest periods) currently used to assess maximum isometric squat and mid-thigh pull strength (ICC > 0.95; CV < 2.0%), 1RM bench press, back squat and clean strength (ICC > 0.91; CV < 4.3%), and ballistic (vertical jump and bench throw) capabilities (ICC > 0.82; CV < 6.5%) were deemed highly reliable. The measurement systems and assessment procedures employed to assess maximum isometric strength [M(Diff) = 2-71%; effect size (ES) = 0.13-4.37], 1RM strength (M(Diff) = 1-58%; ES = 0.01-5.43), vertical jump capabilities (M(Diff) = 2-57%; ES = 0.02-4.67) and bench throw capabilities (M(Diff) = 7-27%; ES = 0.49-2.77) varied greatly, producing trivial to very large effects on these respective measures. Recreational to highly trained athletes produced maximum isometric squat and mid-thigh pull forces of 1,000-4,000 N; and 1RM bench press, back squat and power clean values of 80-180 kg, 100-260 kg and 70-140 kg, respectively. Mean and peak power production across the various loads (body mass to 60% 1RM) were between 300 and 1,500 W during the bench throw and between 1,500 and 9,000 W during the vertical jump. The large variations in maximum strength and power can be attributed to the wide range in physical characteristics between different sports and athletic disciplines, training and chronological age as well as the different measurement systems of the included studies. The reliability and validity outcomes suggest that a number of measurement systems and testing procedures can be implemented to accurately assess maximum strength and ballistic performance in recreational and elite athletes, alike. 
However, the reader needs to be cognisant of the inherent differences between measurement systems, as selection will inevitably affect the outcome measure. The strength and conditioning practitioner should also carefully consider the benefits and limitations of the different measurement systems, testing apparatuses, attachment sites, movement patterns (e.g., direction of movement, contraction type, depth), loading parameters (e.g., no load, single load, absolute load, relative load, incremental loading), warm-up strategies, inter-trial rest periods, dependent variables of interest (i.e., mean, peak and rate dependent variables) and data collection and processing techniques (i.e., sampling frequency, filtering and smoothing options).", "title": "" }, { "docid": "2453c15b322a309fdab77ad3fca917d6", "text": "STUDY OBJECTIVES\nTo explore the diagnostic performance of MRI for the diagnosis of acute myocarditis, using a comprehensive imaging approach.\n\n\nDESIGN AND SETTINGS\nTwenty patients with myocarditis and 7 age-matched and gender-matched control subjects underwent comprehensive MRI. Magnetic resonance (MR) examinations included axial T2-weighted sequences, precontrast and postcontrast ECG-gated T1-weighted sequences in axial and short heart axis, cine-MRI, and serial dynamic turbo fast low-angle shot (turboFLASH) acquisitions in the short axis following Gd injection for a period of 2 min. Precontrast and postcontrast images were postprocessed using subtraction. Two observers read all images qualitatively and quantitatively. Myocardial enhancement was compared between patients and control subjects.\n\n\nPATIENTS\nMyocardial involvement was focal in 6 patients examined within 1 week from clinical onset, and diffuse in the remaining 14 patients examined later.\n\n\nRESULTS\nQualitatively, contrast-enhanced T1-weighted subtracted images had 100% sensitivity and specificity for myocardial involvement. Postcontrast T1-weighted images were able to discriminate the early phase (nodular enhancement) from the later phase of myocarditis (diffuse enhancement). Quantitatively, myocardial enhancement was 56% +/- 3.2% in patients, vs 29% +/- 3.1% in control subjects using T1-weighted MRI (p < 0.0001). Serial turboFLASH images displayed greater myocardial enhancement between 25 s and 120 s in patients than in control subjects (p < 0.0001); however, there was marked enhancement of skeletal muscles in both early and late stages of myocarditis compared to control subjects (p < 0.0001).\n\n\nCONCLUSION\nOn the basis of subtracted cardiac-gated T1-weighted images and serial postinjection turboFLASH images, our study shows that myocarditis is largely, at least in the early stages, a focal process in the myocardium. It also provides evidence of transient skeletal muscle involvement, which may actually be useful for diagnosis.", "title": "" } ]
scidocsrr
f7d0292d545a2890648ed5d78c1991c1
Gated Recurrent Convolution Neural Network for OCR
[ { "docid": "65af21566422d9f0a11f07d43d7ead13", "text": "Scene labeling is a challenging computer vision task. It requires the use of both local discriminative features and global context information. We adopt a deep recurrent convolutional neural network (RCNN) for this task, which is originally proposed for object recognition. Different from traditional convolutional neural networks (CNN), this model has intra-layer recurrent connections in the convolutional layers. Therefore each convolutional layer becomes a two-dimensional recurrent neural network. The units receive constant feed-forward inputs from the previous layer and recurrent inputs from their neighborhoods. While recurrent iterations proceed, the region of context captured by each unit expands. In this way, feature extraction and context modulation are seamlessly integrated, which is different from typical methods that entail separate modules for the two steps. To further utilize the context, a multi-scale RCNN is proposed. Over two benchmark datasets, Standford Background and Sift Flow, the model outperforms many state-of-the-art models in accuracy and efficiency.", "title": "" }, { "docid": "8d5dd3f590dee87ea609278df3572f6e", "text": "In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine – synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one “reading” words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.", "title": "" }, { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" } ]
[ { "docid": "f5bc721d2b63912307c4ad04fb78dd2c", "text": "When women perform math, unlike men, they risk being judged by the negative stereotype that women have weaker math ability. We call this predicament st reotype threat and hypothesize that the apprehension it causes may disrupt women’s math performance. In Study 1 we demonstrated that the pattern observed in the literature that women underperform on difficult (but not easy) math tests was observed among a highly selected sample of men and women. In Study 2 we demonstrated that this difference in performance could be eliminated when we lowered stereotype threat by describing the test as not producing gender differences. However, when the test was described as producing gender differences and stereotype threat was high, women performed substantially worse than equally qualified men did. A third experiment replicated this finding with a less highly selected population and explored the mediation of the effect. The implication that stereotype threat may underlie gender differences in advanced math performance, even", "title": "" }, { "docid": "2e8a6a21feef0ae7d02eca7da5359797", "text": "This thesis reports the construction of a novel apparatus for experiments with ultracold atoms in optical lattices: the Fermi gas microscope. Improving upon similar designs for bosonic atoms, our Fermi gas microscope has the novel feature of being able to achieve single-site resolved imaging of fermionic atoms in an optical lattice; specifically, we use fermionic potassium-40, sympathetically cooled by bosonic sodium-23. In this thesis, several milestones on the way to achieving single-site resolution are described and documented. First, we have tested and mounted in place the imaging optics necessary for achieving single-site resolution. We set up separate 3D magnetooptical traps for capturing and cooling both Na and K. These species are then trapped simultaneously in a plugged quadrupole magnetic trap and evaporated to degeneracy; we obtain a sodium Bose-Einstein condensate with about a million atoms and a degenerate potassium cloud cooled to colder than 1 μK. Using magnetic transport over a distance of 1 cm, we move the cold cloud of atoms into place under the high-resolution imaging system and capture it in a hybrid magnetic and optical-dipole trap. Further evaporation in this hybrid trap performed by lowering the optical trap depth, and the cooled atoms are immersed in an optical lattice, the setup and calibration of which is also described here. Finally, we cool the atoms with optical molasses beams while in the lattice, with the imaging optics collecting the fluoresence light for high-resolution imaging. With molasses cooling set up, single-site fluoresence imaging of bosons and fermions in the same experimental apparatus is within reach. Thesis Supervisor: Martin W. Zwierlein Title: Professor of Physics", "title": "" }, { "docid": "80e9f9261397cb378920a6c897fd352a", "text": "Purpose: This study develops a comprehensive research model that can explain potential customers’ behavioral intentions to adopt and use smart home services. Methodology: This study proposes and validates a new theoretical model that extends the theory of planned behavior (TPB). Partial least squares analysis (PLS) is employed to test the research model and corresponding hypotheses on data collected from 216 survey samples. Findings: Mobility, security/privacy risk, and trust in the service provider are important factors affecting the adoption of smart home services. 
Practical implications: To increase potential users’ adoption rate, service providers should focus on developing mobility-related services that enable people to access smart home services while on the move using mobile devices via control and monitoring functions. Originality/Value: This study is the first empirical attempt to examine user acceptance of smart home services, as most of the prior literature has concerned technical features.", "title": "" }, { "docid": "5ab1d4704e0f6c03fa96b6d530fcc6f8", "text": "The deep convolutional neural networks have achieved significant improvements in accuracy and speed for single image super-resolution. However, as the depth of network grows, the information flow is weakened and the training becomes harder and harder. On the other hand, most of the models adopt a single-stream structure with which integrating complementary contextual information under different receptive fields is difficult. To improve information flow and to capture sufficient knowledge for reconstructing the high-frequency details, we propose a cascaded multi-scale cross network (CMSC) in which a sequence of subnetworks is cascaded to infer high resolution features in a coarse-to-fine manner. In each cascaded subnetwork, we stack multiple multi-scale cross (MSC) modules to fuse complementary multi-scale information in an efficient way as well as to improve information flow across the layers. Meanwhile, by introducing residual-features learning in each stage, the relative information between high-resolution and low-resolution features is fully utilized to further boost reconstruction performance. We train the proposed network with cascaded-supervision and then assemble the intermediate predictions of the cascade to achieve high quality image reconstruction. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over state-of-the-art superresolution methods.", "title": "" }, { "docid": "49fddbf79a836e2ae9f297b32fb3681d", "text": "Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation learning jointly. The extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy. The video of our experiments is available at http://sites.google.com/view/nips17intentiongan.", "title": "" }, { "docid": "8210e2eec6a7a6905bdf57e685289d92", "text": "Attribute-Based Encryption (ABE) is a promising cryptographic primitive which significantly enhances the versatility of access control mechanisms. Due to the high expressiveness of ABE policies, the computational complexities of ABE key-issuing and decryption are getting prohibitively high. Despite that the existing Outsourced ABE solutions are able to offload some intensive computing tasks to a third party, the verifiability of results returned from the third party has yet to be addressed. 
Aiming at tackling the challenge above, we propose a new Secure Outsourced ABE system, which supports both secure outsourced key-issuing and decryption. Our new method offloads all access policy and attribute related operations in the key-issuing process or decryption to a Key Generation Service Provider (KGSP) and a Decryption Service Provider (DSP), respectively, leaving only a constant number of simple operations for the attribute authority and eligible users to perform locally. In addition, for the first time, we propose an outsourced ABE construction which provides checkability of the outsourced computation results in an efficient way. Extensive security and performance analysis show that the proposed schemes are proven secure and practical.", "title": "" }, { "docid": "d30f40e879ae7c5b49b4be94679c7424", "text": "Java offers the basic infrastructure needed to integrate computers connected to the Internet into a seamless parallel computational resource: a flexible, easily-installed infrastructure for running coarsegrained parallel applications on numerous, anonymous machines. Ease of participation is seen as a key property for such a resource to realize the vision of a multiprocessing environment comprising thousands of computers. We present Javelin, a Java-based infrastructure for global computing. The system is based on Internet software technology that is essentially ubiquitous: Web technology. Its architecture and implementation require participants to have access only to a Java-enabled Web browser. The security constraints implied by this, the resulting architecture, and current implementation are presented. The Javelin architecture is intended to be a substrate on which various programming models may be implemented. Several such models are presented: A Linda Tuple Space, an SPMD programming model with barriers, as well as support for message passing. Experimental results are given in the form of micro-benchmarks and a Mersenne Prime application that runs on a heterogeneous network of several parallel machines, workstations, and PCs.", "title": "" }, { "docid": "7a5fb2c77cfe49e6c6070d6d9e555116", "text": "Implicit discourse relation classification is of great challenge due to the lack of connectives as strong linguistic cues, which motivates the use of annotated implicit connectives to improve the recognition. We propose a feature imitation framework in which an implicit relation network is driven to learn from another neural network with access to connectives, and thus encouraged to extract similarly salient features for accurate classification. We develop an adversarial model to enable an adaptive imitation scheme through competition between the implicit network and a rival feature discriminator. Our method effectively transfers discriminability of connectives to the implicit features, and achieves state-of-the-art performance on the PDTB benchmark.", "title": "" }, { "docid": "b37de4587fbadad9258c1c063b03a07a", "text": "Numerous attacks, such as worms, phishing, and botnets, threaten the availability of the Internet, the integrity of its hosts, and the privacy of its users. A core element of defense against these attacks is anti-virus(AV)–a service that detects, removes, and characterizes these threats. The ability of these products to successfully characterize these threats has far-reaching effects—from facilitating sharing across organizations, to detecting the emergence of new threats, and assessing risk in quarantine and cleanup. 
In this paper, we examine the ability of existing host-based anti-virus products to provide semantically meaningful information about the malicious software and tools (or malware) used by attackers. Using a large, recent collection of malware that spans a variety of attack vectors (e.g., spyware, worms, spam), we show that different AV products characterize malware in ways that are inconsistent across AV products, incomplete across malware, and that fail to be concise in their semantics. To address these limitations, we propose a new classification technique that describes malware behavior in terms of system state changes (e.g., files written, processes created) rather than in sequences or patterns of system calls. To address the sheer volume of malware and diversity of its behavior, we provide a method for automatically categorizing these profiles of malware into groups that reflect similar classes of behaviors and demonstrate how behavior-based clustering provides a more direct and effective way of classifying and analyzing Internet malware.", "title": "" }, { "docid": "dc4c5cfb41bfdb84c56183601f922b4f", "text": "Sample selection bias is a common problem encountered when using data mining algorithms for many real-world applications. Traditionally, it is assumed that training and test data are sampled from the same probability distribution, the so called “stationary or non-biased distribution assumption.” However, this assumption is often violated in reality. Typical examples include marketing solicitation, fraud detection, drug testing, loan approval, school enrollment, etc. For these applications the only labeled data available for training is a biased representation, in various ways, of the future data on which the inductive model will predict. Intuitively, some examples sampled frequently into the training data may actually be infrequent in the testing data, and vice versa. When this happens, an inductive model constructed from biased training set may not be as accurate on unbiased testing data if there had not been any selection bias in the training data. In this paper, we first improve and clarify a previously proposed categorization of sample selection bias. In particular, we show that unless under very restricted conditions, sample selection bias is a common problem for many real-world situations. We then analyze various effects of sample selection bias on inductive modeling, in particular, how the “true” conditional probability P (y|x) to be modeled by inductive learners can be misrepresented in the biased training data, that subsequently misleads a learning algorithm. To solve inaccuracy problems due to sample selection bias, we explore how to use model averaging of (1) conditional probabilities P (y|x), (2) feature probabilities P (x), and (3) joint probabilities, P (x, y), to reduce the influence of sample selection bias on model accuracy. In particular, we explore on how to use unlabeled data in a semi-supervised learning framework to improve the accuracy of descriptive models constructed from biased training samples. 
IBM T.J.Watson Research Center, Hawthorne, NY 10532, weifan@us.ibm.com Department of Computer Science, University at Albany, State University of New York, Albany, NY 12222, davidson@cs.albany.edu", "title": "" }, { "docid": "1f05175a0dce51dcd7a1527dce2f1286", "text": "The rapid growth in the volume of many real-world graphs (e.g., social networks, web graphs, and spatial networks) has led to the development of various vertex-centric distributed graph computing systems in recent years. However, real-world graphs from different domains have very different characteristics, which often create bottlenecks in vertex-centric parallel graph computation. We identify three such important characteristics from a wide spectrum of real-world graphs, namely (1)skewed degree distribution, (2)large diameter, and (3)(relatively) high density. Among them, only (1) has been studied by existing systems, but many real-world powerlaw graphs also exhibit the characteristics of (2) and (3). In this paper, we propose a block-centric framework, called Blogel, which naturally handles all the three adverse graph characteristics. Blogel programmers may think like a block and develop efficient algorithms for various graph problems. We propose parallel algorithms to partition an arbitrary graph into blocks efficiently, and blockcentric programs are then run over these blocks. Our experiments on large real-world graphs verified that Blogel is able to achieve orders of magnitude performance improvements over the state-ofthe-art distributed graph computing systems.", "title": "" }, { "docid": "a4d315e5cff107329a603c19177259f1", "text": "Despite the fact that different studies have been performed using transcranial direct current stimulation (tDCS) in aphasia, so far, to what extent the stimulation of a cerebral region may affect the activity of anatomically connected regions remains unclear. The authors used a combination of transcranial magnetic stimulation (TMS) and electroencephalography (EEG) to explore brain areas' excitability modulation before and after active and sham tDCS. Six chronic aphasics underwent 3 weeks of language training coupled with tDCS over the right inferior frontal gyrus. To measure the changes induced by tDCS, TMS-EEG closed to the area stimulated with tDCS were calculated. A significant improvement after tDCS stimulation was found which was accompained by a modification of the EEG over the stimulated region.", "title": "" }, { "docid": "52d6711ebbafd94ab5404e637db80650", "text": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by careful experimentation or modified from a handful of existing networks. We introduce MetaQNN, a meta-modeling algorithm based on reinforcement learning to automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Qlearning with an -greedy exploration strategy and experience replay. The agent explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classification benchmarks, the agent-designed networks (consisting of only standard convolution, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods that use more complex layer types. 
We also outperform existing meta-modeling approaches for network design on image classification tasks.", "title": "" }, { "docid": "7a992410068d53b06fa1249373e513cc", "text": "In the last few years, new observations by CHANDRA and XMM have shown that Pulsar Wind Nebulae present a complex but similar inner feature, with the presence of axisymmetric rings and jets, which is generally referred as jet-torus structure. Due to the rapid growth in accuracy and robustness of numerical schemes for relativistic fluid-dynamics, it is now possible to model the flow and magnetic structure of the relativistic plasma responsible for the emission. Recent results have clarified how the jet and rings are formed, suggesting that the morphology is strongly related to the wind properties, so that, in principle, it is possible to infer the conditions in the unshocked wind from the nebular emission. I will review here the current status in the modeling of Pulsar Wind Nebulae, and, in particular, how numerical simulations have increased our understanding of the flow structure, observed emission, polarization and spectral properties. I will also point to possible future developments of the present models.", "title": "" }, { "docid": "fdfb71f5905b2af2c01c6b4d1fe23d7e", "text": "Many believe the electric power system is undergoing a profound change driven by a number of needs. There's the need for environmental compliance and energy conservation. We need better grid reliability while dealing with an aging infrastructure. And we need improved operational effi ciencies and customer service. The changes that are happening are particularly signifi cant for the electricity distribution grid, where \"blind\" and manual operations, along with the electromechanical components, will need to be transformed into a \"smart grid.\" This transformation will be necessary to meet environmental targets, to accommodate a greater emphasis on demand response (DR), and to support plug-in hybrid electric vehicles (PHEVs) as well as distributed generation and storage capabilities. It is safe to say that these needs and changes present the power industry with the biggest challenge it has ever faced. On one hand, the transition to a smart grid has to be evolutionary to keep the lights on; on the other hand, the issues surrounding the smart grid are signifi cant enough to demand major changes in power systems operating philosophy.", "title": "" }, { "docid": "61ffc67f0e242afd8979d944cbe2bff4", "text": "Diprosopus is a rare congenital malformation associated with high mortality. Here, we describe a patient with diprosopus, multiple life-threatening anomalies, and genetic mutations. Prenatal diagnosis and counseling made a beneficial impact on the family and medical providers in the care of this case.", "title": "" }, { "docid": "d9fcfc15c1c310aef6eec96e230074d1", "text": "There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a “balanced” representation such that the induced treated and control distributions look similar. 
We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art.", "title": "" }, { "docid": "e2b98c529a0175758b2edafe284d0dc7", "text": "This paper is concerned with the problem of fuzzy-filter design for discrete-time nonlinear systems in the Takagi-Sugeno (T-S) form. Different from existing fuzzy filters, the proposed ones are designed in finite-frequency domain. First, a so-called finite-frequency l2 gain is defined that extends the standard l2 gain. Then, a sufficient condition for the filtering-error system with a finite-frequency l2 gain is derived. Based on the obtained condition, three fuzzy filters are designed to deal with noises in the low-, middle-, and high-frequency domain, respectively. The proposed fuzzy-filtering method can get a better noise-attenuation performance when frequency ranges of noises are known beforehand. An example about a tunnel-diode circuit is given to illustrate its effectiveness.", "title": "" }, { "docid": "96804634aa7c691aed1eae11d3e44591", "text": "AIMS\nTo investigated the association between the ABO blood group and gestational diabetes mellitus (GDM).\n\n\nMATERIALS AND METHODS\nA retrospective case-control study was conducted using data from 5424 Japanese pregnancies. GDM screening was performed in the first trimester using a casual blood glucose test and in the second trimester using a 50-g glucose challenge test. If the screening was positive, a 75-g oral glucose tolerance test was performed for a GDM diagnosis, which was defined according to the International Association of Diabetes and Pregnancy Study Groups. Logistic regression was used to obtain the odds ratio (OR) and 95% confidence interval (CI) adjusted for traditional risk factors.\n\n\nRESULTS\nWomen with the A blood group (adjusted OR: 0.34, 95% CI: 0.19-0.63), B (adjusted OR: 0.35, 95% CI: 0.18-0.68), or O (adjusted OR: 0.39, 95% CI: 0.21-0.74) were at decreased risk of GDM compared with those with group AB. Women with the AB group were associated with increased risk of GDM as compared with those with A, B, or O (adjusted OR: 2.73, 95% CI: 1.64-4.57).\n\n\nCONCLUSION\nABO blood groups are associated with GDM, and group AB was a risk factor for GDM in Japanese population.", "title": "" }, { "docid": "ea12fe9b91253634422471024f9d28f8", "text": "Maximum and minimum computed across channels is used to monitor the Electroencephalogram signals for possible change of the eye state. Upon detection of a possible change, the last two seconds of the signal is passed through Multivariate Empirical Mode Decomposition and relevant features are extracted. The features are then fed into Logistic Regression and Artificial Neural Network classifiers to confirm the eye state change. The proposed algorithm detects the eye state change with 88.2% accuracy in less than two seconds. This provides a valuable improvement in comparison to a recent procedure that takes about 20 minutes to classify new instances with 97.3% accuracy. 
The introduced algorithm is promising in the real-time eye state classification as increasing the training examples would increase its accuracy. Published by Elsevier Ltd.", "title": "" } ]
scidocsrr
fb0efe86ab3f84c3e9e45768cebbb6ef
An Application of Fuzzy Concept to Agricultural Farm for Decision Making
[ { "docid": "1093353b15819a11c94467fd8df83ebe", "text": "Multiple Criteria Decision Making (MCDM) shows signs of becoming a maturing field. There are four quite distinct families of methods: (i) the outranking, (ii) the value and utility theory based, (iii) the multiple objective programming, and (iv) group decision and negotiation theory based methods. Fuzzy MCDM has basically been developed along the same lines, although with the help of fuzzy set theory a number of innovations have been made possible; the most important methods are reviewed and a novel approach interdependence in MCDM is introduced.", "title": "" } ]
[ { "docid": "137c30f07ac24f6dafd1429aabe3b931", "text": "Although demonstrated to be efficient and scalable to large-scale data sets, clustering-based recommender systems suffer from relatively low accuracy and coverage. To address these issues, we develop a multiview clustering method through which users are iteratively clustered from the views of both rating patterns and social trust relationships. To accommodate users who appear in two different clusters simultaneously, we employ a support vector regression model to determine a prediction for a given item, based on user-, itemand prediction-related features. To accommodate (cold) users who cannot be clustered due to insufficient data, we propose a probabilistic method to derive a prediction from the views of both ratings and trust relationships. Experimental results on three real-world data sets demonstrate that our approach can effectively improve both the accuracy and coverage of recommendations as well as in the cold start situation, moving clustering-based recommender systems closer towards practical use.", "title": "" }, { "docid": "b64c48d4d2820e01490076c1b18cf32b", "text": "The availability of detailed environmental data, together with inexpensive and powerful computers, has fueled a rapid increase in predictive modeling of species environmental requirements and geographic distributions. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. However, absence data are not available for most species. In this paper, we introduce the use of the maximum entropy method (Maxent) for modeling species geographic distributions with presence-only data. Maxent is a general-purpose machine learning method with a simple and precise mathematical formulation, and it has a number of aspects that make it well-suited for species distribution modeling. In mmals: a diction emaining outline eceiver dicating ts present ues horder to investigate the efficacy of the method, here we perform a continental-scale case study using two Neotropical ma lowland species of sloth, Bradypus variegatus, and a small montane murid rodent, Microryzomys minutus. We compared Maxent predictions with those of a commonly used presence-only modeling method, the Genetic Algorithm for Rule-Set Pre (GARP). We made predictions on 10 random subsets of the occurrence records for both species, and then used the r localities for testing. Both algorithms provided reasonable estimates of the species’ range, far superior to the shaded maps available in field guides. All models were significantly better than random in both binomial tests of omission and r operating characteristic (ROC) analyses. The area under the ROC curve (AUC) was almost always higher for Maxent, in better discrimination of suitable versus unsuitable areas for the species. The Maxent modeling approach can be used in i form for many applications with presence-only datasets, and merits further research and development. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "204d6d3327b4c0977a1ceb0d52cdcce4", "text": "Contrasting meaning is a basic aspect of semantics. Recent word-embedding models based on distributional semantics hypothesis are known to be weak for modeling lexical contrast. We present in this paper the embedding models that achieve an F-score of 92% on the widely-used, publicly available dataset, the GRE “most contrasting word” questions (Mohammad et al., 2008). 
This is the highest performance seen so far on this dataset. Surprisingly at the first glance, unlike what was suggested in most previous work, where relatedness statistics learned from corpora is claimed to yield extra gains over lexicon-based models, we obtained our best result relying solely on lexical resources (Roget’s and WordNet)—corpora statistics did not lead to further improvement. However, this should not be simply taken as that distributional statistics is not useful. We examine several basic concerns in modeling contrasting meaning to provide detailed analysis, with the aim to shed some light on the future directions for this basic semantics modeling problem.", "title": "" }, { "docid": "8c7b6d0ecb1b1a4a612f44e8de802574", "text": "Recently, the Fisher vector representation of local features has attracted much attention because of its effectiveness in both image classification and image retrieval. Another trend in the area of image retrieval is the use of binary feature such as ORB, FREAK, and BRISK. Considering the significant performance improvement in terms of accuracy in both image classification and retrieval by the Fisher vector of continuous feature descriptors, if the Fisher vector were also to be applied to binary features, we would receive the same benefits in binary feature based image retrieval and classification. In this paper, we derive the closed-form approximation of the Fisher vector of binary features which are modeled by the Bernoulli mixture model. In experiments, it is shown that the Fisher vector representation improves the accuracy of image retrieval by 25% compared with a bag of binary words approach.", "title": "" }, { "docid": "1c81694d0b01951aeb6769c43238c830", "text": "Today manipulation of digital images has become easy due to powerful computers, advanced photo-editing software packages and high resolution capturing devices. Verifying the integrity of images and detecting traces of tampering without requiring extra prior knowledge of the image content or any embedded watermarks is an important research field. An attempt is made to survey the recent developments in the field of digital image forgery detection and complete bibliography is presented on blind methods for forgery detection. Blind or passive methods do not need any explicit priori information about the image. First, various image forgery detection techniques are classified and then its generalized structure is developed. An overview of passive image authentication is presented and the existing blind forgery detection techniques are reviewed. The present status of image forgery detection technique is discussed along with a recommendation for future research. a 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "edef5da72f19adc3e89dd4aaa31b7300", "text": "Highly potent, but poorly water-soluble, drug candidates are common outcomes of contemporary drug discovery programmes and present a number of challenges to drug development — most notably, the issue of reduced systemic exposure after oral administration. However, it is increasingly apparent that formulations containing natural and/or synthetic lipids present a viable means for enhancing the oral bioavailability of some poorly water-soluble, highly lipophilic drugs. This Review details the mechanisms by which lipids and lipidic excipients affect the oral absorption of lipophilic drugs and provides a perspective on the possible future applications of lipid-based delivery systems. 
Particular emphasis has been placed on the capacity of lipids to enhance drug solubilization in the intestinal milieu, recruit intestinal lymphatic drug transport (and thereby reduce first-pass drug metabolism) and alter enterocyte-based drug transport and disposition.", "title": "" }, { "docid": "2ee0eb9ab9d6c5b9bdad02b9f95c8691", "text": "Aim: To describe lower extremity injuries for badminton in New Zealand. Methods: Lower limb badminton injuries that resulted in claims accepted by the national insurance company Accident Compensation Corporation (ACC) in New Zealand between 2006 and 2011 were reviewed. Results: The estimated national injury incidence for badminton injuries in New Zealand from 2006 to 2011 was 0.66%. There were 1909 lower limb badminton injury claims which cost NZ$2,014,337 (NZ$ value over 2006 to 2011). The age-bands frequently injured were 10–19 (22%), 40–49 (22%), 30–39 (14%) and 50–59 (13%) years. Sixty five percent of lower limb injuries were knee ligament sprains/tears. Males sustained more cruciate ligament sprains than females (75 vs. 39). Movements involving turning, changing direction, shifting weight, pivoting or twisting were responsible for 34% of lower extremity injuries. Conclusion: The knee was most frequently OPEN ACCESS", "title": "" }, { "docid": "d758e5205ccfcb10c652b76b83b5d462", "text": "A broadband five-section symmetrical 3 dB directional coupler has been designed. In order to improve the frequency characteristic of the coupler a recently proposed compensation technique has been used. The developed coupler operates in 2–12 GHz frequency band and can be applied in a broadband 4×4 Butler matrix.", "title": "" }, { "docid": "bce3143cc1ba21c34ebe5d1b596731f9", "text": "Memory errors in C and C++ programs continue to be one of the dominant sources of security problems, accounting for over a third of the high severity vulnerabilities reported in 2011. Wide-spread deployment of defenses such as address-space layout randomization (ASLR) have made memory exploit development more difficult, but recent trends indicate that attacks are evolving to overcome this defense. Techniques for systematic detection and blocking of memory errors can provide more comprehensive protection that can stand up to skilled adversaries, but unfortunately, these techniques introduce much higher overheads and provide significantly less compatibility than ASLR. We propose a new memory error detection technique that explores a part of the design space that trades off some ability to detect bounds errors in order to obtain good performance and excellent backwards compatibility. On the SPECINT 2000 benchmark, the runtime overheads of our technique is about half of that reported by the fastest previous bounds-checking technique. On the compatibility front, our technique has been tested on over 7 million lines of code, which is much larger than that reported for previous bounds-checking techniques.", "title": "" }, { "docid": "2d6523ef6609c11274449d3b9a57c53c", "text": "Performing information retrieval tasks while preserving data confidentiality is a desirable capability when a database is stored on a server maintained by a third-party service provider. This paper addresses the problem of enabling content-based retrieval over encrypted multimedia databases. Search indexes, along with multimedia documents, are first encrypted by the content owner and then stored onto the server. 
Through jointly applying cryptographic techniques, such as order preserving encryption and randomized hash functions, with image processing and information retrieval techniques, secure indexing schemes are designed to provide both privacy protection and rank-ordered search capability. Retrieval results on an encrypted color image database and security analysis of the secure indexing schemes under different attack models show that data confidentiality can be preserved while retaining very good retrieval performance. This work has promising applications in secure multimedia management.", "title": "" }, { "docid": "e077bb23271fbc056290be84b39a9fcc", "text": "Rovers will continue to play an important role in planetary exploration. Plans include the use of the rocker-bogie rover configuration. Here, models of the mechanics of this configuration are presented. Methods for solving the inverse kinematics of the system and quasi-static force analysis are described. Also described is a simulation based on the models of the rover’s performance. Experimental results confirm the validity of the models.", "title": "" }, { "docid": "02bae85905793e75950acbe2adcc6a7b", "text": "The olfactory system is an essential part of human physiology, with a rich evolutionary history. Although humans are less dependent on chemosensory input than are other mammals (Niimura 2009, Hum. Genomics 4:107-118), olfactory function still plays a critical role in health and behavior. The detection of hazards in the environment, generating feelings of pleasure, promoting adequate nutrition, influencing sexuality, and maintenance of mood are described roles of the olfactory system, while other novel functions are being elucidated. A growing body of evidence has implicated a role for olfaction in such diverse physiologic processes as kin recognition and mating (Jacob et al. 2002a, Nat. Genet. 30:175-179; Horth 2007, Genomics 90:159-175; Havlicek and Roberts 2009, Psychoneuroendocrinology 34:497-512), pheromone detection (Jacob et al. 200b, Horm. Behav. 42:274-283; Wyart et al. 2007, J. Neurosci. 27:1261-1265), mother-infant bonding (Doucet et al. 2009, PLoS One 4:e7579), food preferences (Mennella et al. 2001, Pediatrics 107:E88), central nervous system physiology (Welge-Lüssen 2009, B-ENT 5:129-132), and even longevity (Murphy 2009, JAMA 288:2307-2312). The olfactory system, although phylogenetically ancient, has historically received less attention than other special senses, perhaps due to challenges related to its study in humans. In this article, we review the anatomic pathways of olfaction, from peripheral nasal airflow leading to odorant detection, to epithelial recognition of these odorants and related signal transduction, and finally to central processing. Olfactory dysfunction, which can be defined as conductive, sensorineural, or central (typically related to neurodegenerative disorders), is a clinically significant problem, with a high burden on quality of life that is likely to grow in prevalence due to demographic shifts and increased environmental exposures.", "title": "" }, { "docid": "64ce725037b72921b979583f6fdc4f27", "text": "We describe an approach to object retrieval which searches for and localizes all the occurrences of an object in a video, given a query image of the object. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. 
The temporal continuity of the video within a shot is used to track the regions in order to reject those that are unstable. Efficient retrieval is achieved by employing methods from statistical text retrieval, including inverted file systems, and text and document frequency weightings. This requires a visual analogy of a word which is provided here by vector quantizing the region descriptors. The final ranking also depends on the spatial layout of the regions. The result is that retrieval is immediate, returning a ranked list of shots in the manner of Google. We report results for object retrieval on the full length feature films ‘Groundhog Day’ and ‘Casablanca’.", "title": "" }, { "docid": "9cd00d9975c1efa741d1b01200a7d660", "text": "BACKGROUND\nMany ethical problems exist in nursing homes. These include, for example, decision-making in end-of-life care, use of restraints and a lack of resources.\n\n\nAIMS\nThe aim of the present study was to investigate nursing home staffs' opinions and experiences with ethical challenges and to find out which types of ethical challenges and dilemmas occur and are being discussed in nursing homes.\n\n\nMETHODS\nThe study used a two-tiered approach, using a questionnaire on ethical challenges and systematic ethics work, given to all employees of a Norwegian nursing home including nonmedical personnel, and a registration of systematic ethics discussions from an Austrian model of good clinical practice.\n\n\nRESULTS\nNinety-one per cent of the nursing home staff described ethical problems as a burden. Ninety per cent experienced ethical problems in their daily work. The top three ethical challenges reported by the nursing home staff were as follows: lack of resources (79%), end-of-life issues (39%) and coercion (33%). To improve systematic ethics work, most employees suggested ethics education (86%) and time for ethics discussion (82%). Of 33 documented ethics meetings from Austria during a 1-year period, 29 were prospective resident ethics meetings where decisions for a resident had to be made. Agreement about a solution was reached in all 29 cases, and this consensus was put into practice in all cases. Residents did not participate in the meetings, while relatives participated in a majority of case discussions. In many cases, the main topic was end-of-life care and life-prolonging treatment.\n\n\nCONCLUSIONS\nLack of resources, end-of-life issues and coercion were ethical challenges most often reported by nursing home staff. The staff would appreciate systematic ethics work to aid decision-making. Resident ethics meetings can help to reach consensus in decision-making for nursing home patients. In the future, residents' participation should be encouraged whenever possible.", "title": "" }, { "docid": "2ffca8ee12f4266f42dc27ad430e4b62", "text": "The growing concern over environmental degradation resulting from combustion of fossil fuels and depleting fossil fuel reserves has raised awareness about alternative energy options. Renewable energy system is perfect solution of this problem. This paper presents a mathematical model of single diode solar photovoltaic (SPV) module. SPV cell generates electricity when exposed to sunlight but this generation depends on whether condition like temperature and irradiance, for better accuracy all the parameters are considered including shunt, series resistance and simulated in MATLAB/Simulink. 
The output is analyzed by varying the temperature and irradiance and effect of change in shunt and series resistance is also observed.", "title": "" }, { "docid": "bae6122c5dd234ec24ed5efd030a5e83", "text": "This paper presents a novel computer-aided diagnosis (CAD) technique for the early diagnosis of the Alzheimer's disease (AD) based on nonnegative matrix factorization (NMF) and support vector machines (SVM) with bounds of confidence. The CAD tool is designed for the study and classification of functional brain images. For this purpose, two different brain image databases are selected: a single photon emission computed tomography (SPECT) database and positron emission tomography (PET) images, both of them containing data for both Alzheimer's disease (AD) patients and healthy controls as a reference. These databases are analyzed by applying the Fisher discriminant ratio (FDR) and nonnegative matrix factorization (NMF) for feature selection and extraction of the most relevant features. The resulting NMF-transformed sets of data, which contain a reduced number of features, are classified by means of a SVM-based classifier with bounds of confidence for decision. The proposed NMF-SVM method yields up to 91% classification accuracy with high sensitivity and specificity rates (upper than 90%). This NMF-SVM CAD tool becomes an accurate method for SPECT and PET AD image classification.", "title": "" }, { "docid": "bfbe4db13bfd1980aaae4cdf9e978e63", "text": "We establish in 2D, the PDE associated with a classical debluring filter, the Kramer operator and compare it with another classical shock filter.", "title": "" }, { "docid": "4e08aba1ff8d0a5d0d23763dad627cb8", "text": "Abstraction: Real systems are difficult to specify and verify without abstractions. We need to identify different kinds of abstractions, perhaps tailored for certain kinds of systems or problem domains, and we need to develop ways to justify them formally, perhaps using mechanical help. Reusable models and theories: Rather than defining models and theories from scratch each time a new application is tackled, it would be better to have reusable and parameterized models and theories. Combinations of mathematical theories: Many safety critical systems have both digital and analog components. These hybrid systems require reasoning about both discrete and continuous mathematics. System developers would like to be able to predict how well their system will operate in the field. Indeed, they often care more about performance than correctness. Performance modeling borrows strongly from probability, statistics, and queueing theory. Data structures and algorithms: To handle larger search spaces and larger systems, new data structures and algorithms, e.g., more concise data structures for representing boolean functions, are needed.", "title": "" }, { "docid": "8ca8d0bb6ef41b10392e5d64ca96d2ab", "text": "This longitudinal study provides an analysis of the relationship between personality traits and work experiences with a special focus on the relationship between changes in personality and work experiences in young adulthood. Longitudinal analyses uncovered 3 findings. First, measures of personality taken at age 18 predicted both objective and subjective work experiences at age 26. Second, work experiences were related to changes in personality traits from age 18 to 26.
Third, the predictive and change relations between personality traits and work experiences were corresponsive: Traits that \"selected\" people into specific work experiences were the same traits that changed in response to those same work experiences. The relevance of the findings to theories of personality development is discussed.", "title": "" } ]
scidocsrr
0ff8099bf055016595ac7499c1c8a55d
The WiLI benchmark dataset for written language identification
[ { "docid": "af85d7541ecd30d95236bb8779b7c9ab", "text": "The paper presents a Markov chain-based method for automatic written language identification. Given a training document in a specific language, each word can be represented as a Markov chain of letters. Using the entire training document regarded as a set of Markov chains, the set of initial and transition probabilities can be calculated and referred to as a Markov model for that language. Given an unknown language string, the maximum likelihood decision rule was used to identify language. Experimental results showed that the proposed method achieved lower error rate and faster identification speed than the current n-gram method.", "title": "" }, { "docid": "90b1d0a8670e74ff3549226acd94973e", "text": "Language identification is the task of automatically detecting the language(s) present in a document based on the content of the document. In this work, we address the problem of detecting documents that contain text from more than one language (multilingual documents). We introduce a method that is able to detect that a document is multilingual, identify the languages present, and estimate their relative proportions. We demonstrate the effectiveness of our method over synthetic data, as well as real-world multilingual documents collected from the web.", "title": "" }, { "docid": "0ce82ead0954b99d811b9f50eee76abc", "text": "Convolutional Neural Networks (CNNs) dominate various computer vision tasks since Alex Krizhevsky showed that they can be trained effectively and reduced the top-5 error from 26.2 % to 15.3 % on the ImageNet large scale visual recognition challenge. Many aspects of CNNs are examined in various publications, but literature about the analysis and construction of neural network architectures is rare. This work is one step to close this gap. A comprehensive overview over existing techniques for CNN analysis and topology construction is provided. A novel way to visualize classification errors with confusion matrices was developed. Based on this method, hierarchical classifiers are described and evaluated. Additionally, some results are confirmed and quantified for CIFAR-100. For example, the positive impact of smaller batch sizes, averaging ensembles, data augmentation and test-time transformations on the accuracy. Other results, such as the positive impact of learned color transformation on the test accuracy could not be confirmed. A model which has only one million learned parameters for an input size of 32× 32× 3 and 100 classes and which beats the state of the art on the benchmark dataset Asirra, GTSRB, HASYv2 and STL-10 was developed.", "title": "" } ]
[ { "docid": "24bd9a2f85b33b93609e03fc67e9e3a9", "text": "With the rapid development of high-throughput technologies, researchers can sequence the whole metagenome of a microbial community sampled directly from the environment. The assignment of these metagenomic reads into different species or taxonomical classes is a vital step for metagenomic analysis, which is referred to as binning of metagenomic data. In this paper, we propose a new method TM-MCluster for binning metagenomic reads. First, we represent each metagenomic read as a set of \"k-mers\" with their frequencies occurring in the read. Then, we employ a probabilistic topic model -- the Latent Dirichlet Allocation (LDA) model to the reads, which generates a number of hidden \"topics\" such that each read can be represented by a distribution vector of the generated topics. Finally, as in the MCluster method, we apply SKWIC -- a variant of the classical K-means algorithm with automatic feature weighting mechanism to cluster these reads represented by topic distributions. Experiments show that the new method TM-MCluster outperforms major existing methods, including AbundanceBin, MetaCluster 3.0/5.0 and MCluster. This result indicates that the exploitation of topic modeling can effectively improve the binning performance of metagenomic reads.", "title": "" }, { "docid": "4018c4183c2f60d98c7fdaa21fb17379", "text": "Algebraic key establishment protocols based on the difficulty of solving equations over algebraic structures are described as a theoretical basis for constructing public–key cryptosystems.", "title": "" }, { "docid": "bb008d90a8e5ea4262afc0cf784ccbb8", "text": "*Correspondence to: Michaël Messaoudi; Email: mmessaoudi@etap-lab.com In a recent clinical study, we demonstrated in the general population that Lactobacillus helveticus R0052 and Bifidobacterium longum R0175 (PF) taken in combination for 30 days decreased the global scores of hospital anxiety and depression scale (HADs), and the global severity index of the Hopkins symptoms checklist (HSCL90), due to the decrease of the sub-scores of somatization, depression and angerhostility spheres. Therefore, oral intake of PF showed beneficial effects on anxiety and depression related behaviors in human volunteers. From there, it is interesting to focus on the role of this probiotic formulation in the subjects with the lowest urinary free cortisol levels at baseline. This addendum presents a secondary analysis of the effects of PF in a subpopulation of 25 subjects with urinary free cortisol (UFC) levels less than 50 ng/ml at baseline, on psychological distress based on the percentage of change of the perceived stress scale (PSs), the HADs and the HSCL-90 scores between baseline and follow-up. The data show that PF improves the same scores as in the general population (the HADs global score, the global severity index of the HSCL-90 and three of its sub-scores, i.e., somatization, depression and anger-hostility), as well as the PSs score and three other subscores of the HSCL-90, i.e., “obsessive compulsive,” “anxiety” and “paranoidideation.” Moreover, in the HSCL-90, Beneficial psychological effects of a probiotic formulation (Lactobacillus helveticus R0052 and Bifidobacterium longum R0175) in healthy human volunteers", "title": "" }, { "docid": "fc7c7828428a4018a8aaddaff4eb5b3f", "text": "Data mining is comprised of many data analysis techniques. Its basic objective is to discover the hidden and useful data pattern from very large set of data. 
Graph mining, which has gained much attention in the last few decades, is one of the novel approaches for mining the dataset represented by graph structure. Graph mining finds its applications in various problem domains, including: bioinformatics, chemical reactions, Program flow structures, computer networks, social networks etc. Different data mining approaches are used for mining the graph-based data and performing useful analysis on these mined data. In literature various graph mining approaches have been proposed. Each of these approaches is based on either classification; clustering or decision trees data mining techniques. In this study, we present a comprehensive review of various graph mining techniques. These different graph mining techniques have been critically evaluated in this study. This evaluation is based on different parameters. In our future work, we will provide our own classification based graph mining technique which will efficiently and accurately perform mining on the graph structured data.", "title": "" }, { "docid": "6b3c462008743d69951053b8a77944d7", "text": "Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models “forget” how to perform the first task. This is widely believed to be a serious problem for neural networks. Here, we investigate the extent to which the catastrophic forgetting problem occurs for modern neural networks, comparing both established and recent gradient-based training algorithms and activation functions. We also examine the effect of the relationship between the first task and the second task on catastrophic forgetting. We find that it is always best to train using the dropout algorithm– the dropout algorithm is consistently best at adapting to the new task, remembering the old task, and has the best tradeoff curve between these two extremes. We find that different tasks and relationships between tasks result in very different rankings of activation function performance. This suggests that the choice of activation function should always be cross-validated.", "title": "" }, { "docid": "071988211041ce8a9b1b3c5feed6c4dc", "text": "In many machine learning algorithms, a major assumption is that the training and the test samples are in the same feature space and have the same distribution. However, for many real applications this assumption does not hold. In this paper, we survey the problem where the training samples and the test samples are from different distributions. This problem can be referred as domain adaptation. The training samples, always with labels, are obtained from what is called source domains, while the test samples, which usually have no labels or only a few labels, are obtained from what is called target domains. The source domains and the target domains are different but related to some extent; the learners can learn some information from the source domains for the learning of the target domains. We focus on the multisource domain adaptation problem where there is more than one source domain available together with only one target domain. A key issue is how to select good sources and samples for the adaptation. In this survey, we review some theoretical results and well developed algorithms for the multi-source domain adaptation problem. We also discuss some open problems which can be explored in future work. 2014 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "e4aeb9f472b9e81691472c17da95e9df", "text": "A novel high-gain active composite right/left-handed (CRLH) metamaterial leaky-wave antenna (LWA) is presented. This antenna, which is designed to operate at broadside, is constituted by passive CRLH leaky-wave sections interconnected by amplifiers, which regenerate the power progressively leaked out of the structure in the radiation process in order to increase the effective aperture of the antenna and thereby its gain. The gain is further enhanced by a matching regeneration effect induced by the quasi-unilateral nature of the amplifiers. Both the cases of quasi-uniform and binomial field distributions, corresponding to maximum directivity and minimum side-lobe level, respectively, have been described. An active LWA prototype is demonstrated in transmission mode with a gain enhancement of 8.9 dB compared to its passive counterpart. The proposed antenna can attain an arbitrarily high gain by simple increase of the length of the structure, without penalty in terms of return loss and without requiring a complicated feeding network like conventional array antennas", "title": "" }, { "docid": "8ba2b376995e3a6a02720a73012d590b", "text": "This paper focuses on reducing the power consumption of wireless microsensor networks. Therefore, a communication protocol named LEACH (Low-Energy Adaptive Clustering Hierarchy) is modified. We extend LEACH’s stochastic clusterhead selection algorithm by a deterministic component. Depending on the network configuration an increase of network lifetime by about 30 % can be accomplished. Furthermore, we present a new approach to define lifetime of microsensor networks using three new metrics FND (First Node Dies), HNA (Half of the Nodes Alive), and LND (Last Node Dies).", "title": "" }, { "docid": "67313cf39c546b406b82830e3c03f5c9", "text": "Brand communities and social media often overlap. Social media is an ideal environment for building brand communities. However, there is limited research about the benefits and consequences of brand communities established on social media platforms. This study addresses this issue by developing a model depicting how consumers’ relationship with the elements of a brand community based on social media (i.e. brand, product, company, and other consumers) influence brand trust. The findings include that three of the four relationships positively influence brand trust. However, customer-other customers’ relationships negatively influence brand trust, which is counter intuitive and interesting. The prominent role of engagement in a brand community is also investigated in the model. Community engagement amplifies the strength of the relationships consumers make with the elements of brand community and it has a moderating effect in translating the effects of such relationships on brand trust. Finally, theoretical and managerial implications are discussed. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "310f13dac8d7cf2d1b40878ef6ce051b", "text": "Traffic Accidents are occurring due to development of automobile industry and the accidents are unavoidable even the traffic rules are very strictly maintained. Data mining algorithm is applied to model the traffic accident injury level by using traffic accident dataset. It helped by obtaining the characteristics of drivers behavior, road condition and weather condition, Accident severity that are connected with different injury severities and death. 
This paper presents some models to predict the severity of injury using some data mining algorithms. The study focused on collecting the real data from previous research and obtains the injury severity level of traffic accident data.", "title": "" }, { "docid": "51ea8936c266077b1522d1d953d356ec", "text": "Speech data typically contains task irrelevant information lying within features. Specifically, phonetic information, speaker characteristic information, emotional information and noise are always mixed together and tend to impair one another for certain task. We propose a new type of auto-encoder for feature learning called contrastive auto-encoder. Unlike other variants of auto-encoders, contrastive auto-encoder is able to leverage class labels in constructing its representation layer. We achieve this by modeling two autoencoders together and making their differences contribute to the total loss function. The transformation built with contrastive auto-encoder can be seen as a task-specific and invariant feature learner. Our experiments on TIMIT clearly show the superiority of the feature extracted from contrastive auto-encoder over original acoustic feature, feature extracted from deep auto-encoder, and feature extracted from a model that contrastive auto-encoder originates from.", "title": "" }, { "docid": "0a23995317063e773c3ac69cfd6b8e70", "text": "This paper proposes a temporal tracking algorithm based on Random Forest that uses depth images to estimate and track the 3D pose of a rigid object in real-time. Compared to the state of the art aimed at the same goal, our algorithm holds important attributes such as high robustness against holes and occlusion, low computational cost of both learning and tracking stages, and low memory consumption. These are obtained (a) by a novel formulation of the learning strategy, based on a dense sampling of the camera viewpoints and learning independent trees from a single image for each camera view, as well as, (b) by an insightful occlusion handling strategy that enforces the forest to recognize the object's local and global structures. Due to these attributes, we report state-of-the-art tracking accuracy on benchmark datasets, and accomplish remarkable scalability with the number of targets, being able to simultaneously track the pose of over a hundred objects at 30~fps with an off-the-shelf CPU. In addition, the fast learning time enables us to extend our algorithm as a robust online tracker for model-free 3D objects under different viewpoints and appearance changes as demonstrated by the experiments.", "title": "" }, { "docid": "15dc2cd497f782d16311cd0e658e2e90", "text": "We present a novel view of the structuring of distributed systems, and a few examples of its utilization in an object-oriented context. In a distributed system, the structure of a service or subsystem may be complex, being implemented as a set of communicating server objects; however, this complexity of structure should not be apparent to the client. In our proposal, a client must first acquire a local object, called a proxy, in order to use such a service. The proxy represents the whole set of servers. The client directs all its communication to the proxy. The proxy, and all the objects it represents, collectively form one distributed object, which is not decomposable by the client. Any higher-level communication protocols are internal to this distributed object. 
Such a view provides a powerful structuring framework for distributed systems; it can be implemented cheaply without sacrificing much flexibility. It subsumes may previous proposals, but encourages better information-hiding and encapsulation.", "title": "" }, { "docid": "42a412b11300ec8d7721c1f532dadfb9", "text": " Most data-driven dependency parsing approaches assume that sentence structure is represented as trees. Although trees have several desirable properties from both computational and linguistic perspectives, the structure of linguistic phenomena that goes beyond shallow syntax often cannot be fully captured by tree representations. We present a parsing approach that is nearly as simple as current data-driven transition-based dependency parsing frameworks, but outputs directed acyclic graphs (DAGs). We demonstrate the benefits of DAG parsing in two experiments where its advantages over dependency tree parsing can be clearly observed: predicate-argument analysis of English and syntactic analysis of Danish with a representation that includes long-distance dependencies and anaphoric reference links.", "title": "" }, { "docid": "5d6cb3669a277e0aed4f75506f158dd5", "text": "The following sections will apply the foregoing induction systems to three specific types of problems, and discuss the “reasonableness” of the results obtained. Section 4.1 deals with the Bernoulli sequence. The predictions obtained are identical to those given by “Laplace’s Rule of Succession.” A particularly important technique is used to code the original sequence into a set of integers which constitute its “descriptions” for the problems of Sections 4.2 and 4.3. Section 4.2 deals with the extrapolation of a sequence in which there are certain kinds of intersymbol constraints. Codes for such sequences are devised by defining special symbols for subsequences whose frequencies are unusually high or low. Some properties of this coding method are discussed, and they are found to be intuitively reasonable. A preliminary computer program has been written for induction using this coding method. However, there are some important simplifications used in the program, and it is uncertain as to whether it can make useful predictions. Section 4.3 describes the use of phrase structure grammars for induction. A formal solution is presented and although the resultant analysis indicates that this model conforms to some extent to intuitive expectations, the author feels that it still has at least one serious shortcoming in that it has no good means", "title": "" }, { "docid": "2c832dea09e5fc622a5c1bbfdb53f8b2", "text": "A recent meta-analysis (S. Vazire & D. C. Funder, 2006) suggested that narcissism and impulsivity are related and that impulsivity partially accounts for the relation between narcissism and self-defeating behaviors (SDB). This research examines these hypotheses in two studies and tests a competing hypothesis that Extraversion and Agreeableness account for this relation. In Study 1, we examined the relations among narcissism, impulsivity, and aggression. Both narcissism and impulsivity predicted aggression, but impulsivity did not mediate the narcissism-aggression relation. In Study 2, narcissism was related to a measure of SDB and manifested divergent relations with a range of impulsivity traits from three measures. None of the impulsivity models accounted for the narcissism-SDB relation, although there were unique mediating paths for traits related to sensation and fun seeking. 
The domains of Extraversion and low Agreeableness successfully mediated the entire narcissism-SDB relation. We address the discrepancy between the current and meta-analytic findings.", "title": "" }, { "docid": "7a5edda3bc5b271b6c1305c6a13d50eb", "text": "Feature-sensitive verification pursues effective analysis of the exponentially many variants of a program family. However, researchers lack examples of concrete bugs induced by variability, occurring in real large-scale systems. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 42 variability bugs collected from bug-fixing commits to the Linux kernel repository. We analyze each of the bugs, and record the results in a database. In addition, we provide self-contained simplified C99 versions of the bugs, facilitating understanding and tool evaluation. Our study provides insights into the nature and occurrence of variability bugs in a large C software system, and shows in what ways variability affects and increases the complexity of software bugs.", "title": "" }, { "docid": "279302300cbdca5f8d7470532928f9bd", "text": "The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) of fer a natural way to solve this problem. In this paper we present a special Genetic Algorithm, which especially take s into account the existing bounds on the generalization erro r for Support Vector Machines (SVMs). This new approach is compared to the traditional method of performing crossvalidation and to other existing algorithms for feature selection.", "title": "" }, { "docid": "b5a64e072961be91e6ee92e8a6689596", "text": "Cortical bone supports and protects our skeletal functions and it plays an important in determining bone strength and fracture risks. Cortical bone segmentation is needed for quantitative analyses and the task is nontrivial for in vivo multi-row detector CT (MD-CT) imaging due to limited resolution and partial volume effects. An automated cortical bone segmentation algorithm for in vivo MD-CT imaging of distal tibia is presented. It utilizes larger contextual and topologic information of the bone using a modified fuzzy distance transform and connectivity analyses. An accuracy of 95.1% in terms of volume of agreement with true segmentations and a repeat MD-CT scan intra-class correlation of 98.2% were observed in a cadaveric study. An in vivo study involving 45 age-similar and height-matched pairs of male and female volunteers has shown that, on an average, male subjects have 16.3% thicker cortex and 4.7% increased porosity as compared to females.", "title": "" }, { "docid": "986a0b910a4674b3c4bf92a668780dd6", "text": "One of the most important attributes of the polymerase chain reaction (PCR) is its exquisite sensitivity. However, the high sensitivity of PCR also renders it prone to falsepositive results because of, for example, exogenous contamination. Good laboratory practice and specific anti-contamination strategies are essential to minimize the chance of contamination. Some of these strategies, for example, physical separation of the areas for the handling samples and PCR products, may need to be taken into consideration during the establishment of a laboratory. 
In this chapter, different strategies for the detection, avoidance, and elimination of PCR contamination will be discussed.", "title": "" } ]
scidocsrr
63d25ad75a14b215eda8b1992cab6ad3
Bayesian evidence and model selection
[ { "docid": "93297115eb5153a41a79efe582bd34b1", "text": "Abslract Bayesian probabilily theory provides a unifying framework for dara modelling. In this framework the overall aims are to find models that are well-matched to, the &a, and to use &se models to make optimal predictions. Neural network laming is interpreted as an inference of the most probable parameters for Ihe model, given the training data The search in model space (i.e., the space of architectures, noise models, preprocessings, regularizes and weight decay constants) can then also be treated as an inference problem, in which we infer the relative probability of alternative models, given the data. This review describes practical techniques based on G ~ ~ s s ~ M approximations for implementation of these powerful methods for controlling, comparing and using adaptive network$.", "title": "" } ]
[ { "docid": "b544aec3db71397c3b81851e8d770fda", "text": "A novel substrate integrated waveguide (SIW) slot antenna having folded corrugated stubs is proposed for suppressing the backlobes of the SIW slot antenna associated with the diffraction of the spillover current. The longitudinal array of the folded stubs replacing the SIW via-holes effectively prevents the propagation of the surface spillover current. The measured front-to-back ratio (FTBR) has been greatly (15 dB) improved from that of the common SIW slot antenna. We expect that the proposed folded corrugated SIW (FCSIW) slot antenna plays an important role for reducing the excessive backside radiation of the SIW slot antenna and for decreasing mutual coupling in SIW slot antenna arrays.", "title": "" }, { "docid": "994194689f025c2ed44157e127baaa79", "text": "Using information systems effectively requires an understanding of the organisation, management, and the technology shaping the systems. All information systems can be described as organisational and management solutions to challenges posed by the environment. The advances in information systems have affect on our day-to day lives . As the technology is evolving immensely so are the opportunities in a healthy way to prepare the organisation in the competitive advantage environment In order to manage the IS/IT based systems, it is important to have an appropriate strategy that defines the systems and provide means to manage them. Strategic Information Systems Alignment (SISA) is an effective way of developing and maintaining the IS/IT systems that support the business operations. Alignment of the IS/IT plans and the business plans is essential for improved business performance, this research looks at the key features of SISA in the changing business circumstances in Saudi Arabia. Keywords—Information Systems, Information Systems, Business Planning, Planning Strategy, IT/IS Alignment.", "title": "" }, { "docid": "298eb9ced049a3316acdb3ead870aca9", "text": "This paper puts forth a method for discovering computationally-derived conceptual spaces that reflect human conceptualization of musical and poetic creativity. We describe a lexical space that is defined through co-occurrence statistics, and compare the dimensions of this space with human responses on a word association task. Participants’ responses serve as external validation of our computational findings, and frequent terms are also used as input dimensions for creating mappings from the linguistic to the conceptual domain. This novel method finds low-dimensional subspaces that represent particular conceptual regions within a vector space model of distributional semantics. Word-vectors from these discovered conceptual spaces are considered, and argued to be useful for the evaluation of creativity and creative artifacts within computational creativity.", "title": "" }, { "docid": "20d95255d3cf72174cbdc6f8614796a5", "text": "This paper gives a review of the recent developments in deep learning and unsupervised feature learning for time-series problems. While these techniques have shown promise for modeling static data, such as computer vision, applying them to time-series data is gaining increasing attention. 
This paper overviews the particular challenges present in time-series data and provides a review of the works that have either applied time-series data to unsupervised feature learning algorithms or alternatively have contributed to modifications of feature learning algorithms to take into account the challenges present in time-series data.", "title": "" }, { "docid": "46465926afb62b9f73386a962047875d", "text": "Cervical cancer represents the second leading cause of death for women worldwide. The importance of the diet and its impact on specific types of neoplasia has been highlighted, focusing again interest in the analysis of dietary phytochemicals. Polyphenols have shown a wide range of cellular effects: they may prevent carcinogens from reaching the targeted sites, support detoxification of reactive molecules, improve the elimination of transformed cells, increase the immune surveillance and the most important factor is that they can influence tumor suppressors and inhibit cellular proliferation, interfering in this way with the steps of carcinogenesis. From the studies reviewed in this paper, it is clear that certain dietary polyphenols hold great potential in the prevention and therapy of cervical cancer, because they interfere in carcinogenesis (in the initiation, development and progression) by modulating the critical processes of cellular proliferation, differentiation, apoptosis, angiogenesis and metastasis. Specifically, polyphenols inhibit the proliferation of HPV cells, through induction of apoptosis, growth arrest, inhibition of DNA synthesis and modulation of signal transduction pathways. The effects of combinations of polyphenols with chemotherapy and radiotherapy used in the treatment of cervical cancer showed results in the resistance of cervical tumor cells to chemo- and radiotherapy, one of the main problems in the treatment of cervical neoplasia that can lead to failure of the treatment because of the decreased efficiency of the therapy.", "title": "" }, { "docid": "50d0b1e141bcea869352c9b96b0b2ad5", "text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.", "title": "" }, { "docid": "56d4c6f8be025bbbe0ae0f678528e778", "text": "In this paper, we study the spatial pattern matching (SPM) query. Given a set D of spatial objects (e.g., houses and shops), each with a textual description, we aim at finding all combinations of objects from D that match a user-defined spatial pattern P. A pattern P is a graph where vertices represent spatial objects, and edges denote distance relationships between them. The SPM query returns the instances that satisfy P. An example of P can be \"a house within 10-minute walk from a school, which is at least 2km away from a hospital\". The SPM query can benefit users such as house buyers, urban planners, and archaeologists. We prove that answering such queries is computationally intractable, and propose two efficient algorithms for their evaluation.
Extensive experimental evaluation and cases studies on four real datasets show that our proposed solutions are highly effective and efficient.", "title": "" }, { "docid": "7e6280b0c4b2e100f7219d2b463d9961", "text": "Big Data era is upon us, a huge amount of data is generated daily, analyzing and making use of this huge amount of information is a top priority for all kinds of businesses. However, one of the most important problems that hinders the unanimous adoption of Big Data is the lack of security and privacy protection of information in the Big Data tools. In this paper we contribute to reinforcing the security of Big Data platforms by proposing a blockchain-based access control framework. We define the concept of blockchain and breakdown the mechanism and principles of the access control framework.", "title": "" }, { "docid": "52462bd444f44910c18b419475a6c235", "text": "Snoring is a common symptom of serious chronic disease known as obstructive sleep apnea (OSA). Knowledge about the location of obstruction site (VVelum, OOropharyngeal lateral walls, T-Tongue, E-Epiglottis) in the upper airways is necessary for proper surgical treatment. In this paper we propose a dual source-filter model similar to the source-filter model of speech to approximate the generation process of snore audio. The first filter models the vocal tract from lungs to the point of obstruction with white noise excitation from the lungs. The second filter models the vocal tract from the obstruction point to the lips/nose with impulse train excitation which represents vibrations at the point of obstruction. The filter coefficients are estimated using the closed and open phases of the snore beat cycle. VOTE classification is done by using SVM classifier and filter coefficients as features. The classification experiments are performed on the development set (283 snore audios) of the MUNICH-PASSAU SNORE SOUND CORPUS (MPSSC). We obtain an unweighted average recall (UAR) of 49.58%, which is higher than the INTERSPEECH-2017 snoring sub-challenge baseline technique by ∼3% (absolute).", "title": "" }, { "docid": "ee6548df916820bdddd44f5e4216f09b", "text": "With the proliferation of large-scale community-contributed images, hashing-based approximate nearest neighbor search in huge databases has aroused considerable interest from the fields of computer vision and multimedia in recent years because of its computational and memory efficiency. In this paper, we propose a novel hashing method named neighborhood discriminant hashing (NDH) (for short) to implement approximate similarity search. Different from the previous work, we propose to learn a discriminant hashing function by exploiting local discriminative information, i.e., the labels of a sample can be inherited from the neighbor samples it selects. The hashing function is expected to be orthogonal to avoid redundancy in the learned hashing bits as much as possible, while an information theoretic regularization is jointly exploited using maximum entropy principle. As a consequence, the learned hashing function is compact and nonredundant among bits, while each bit is highly informative. Extensive experiments are carried out on four publicly available data sets and the comparison results demonstrate the outperforming performance of the proposed NDH method over state-of-the-art hashing techniques.", "title": "" }, { "docid": "152e8d51669b095dab15fa509d9ce9f8", "text": "Virtualization technology plays a vital role in cloud computing. 
In particular, benefits of virtualization are widely employed in high performance computing (HPC) applications. Recently, virtual machines (VMs) and Docker containers known as two virtualization platforms need to be explored for developing applications efficiently. We target a model for deploying distributed applications on Docker containers, among using well-known benchmarks to evaluate performance between VMs and containers. Based on their architecture, we propose benchmark scenarios to analyze the computing performance and the ability of data access on HPC system. Remarkably, Docker container has more advantages than virtual machine in terms of data intensive application and computing ability, especially the overhead of Docker is trivial. However, Docker architecture has some drawbacks in resource management. Our experiment and evaluation show how to deploy efficiently high performance computing applications on Docker containers and VMs.", "title": "" }, { "docid": "4d5461e076839bf2364a190808959acb", "text": "environment, are becoming increasingly prevalent. However, if agents are to behave intelligently in complex, dynamic, and noisy environments, we believe that they must be able to learn and adapt. The reinforcement learning (RL) paradigm is a popular way for such agents to learn from experience with minimal feedback. One of the central questions in RL is how best to generalize knowledge to successfully learn and adapt. In reinforcement learning problems, agents sequentially observe their state and execute actions. The goal is to maximize a real-valued reward signal, which may be time delayed. For example, an agent could learn to play a game by being told what the state of the board is, what the legal actions are, and then whether it wins or loses at the end of the game. However, unlike in supervised learning scenarios, the agent is never provided the “correct” action. Instead, the agent can only gather data by interacting with an environment, receiving information about the results, its actions, and the reward signal. RL is often used because of the framework’s flexibility and due to the development of increasingly data-efficient algorithms. RL agents learn by interacting with the environment, gathering data. If the agent is virtual and acts in a simulated environment, training data can be collected at the expense of computer time. However, if the agent is physical, or the agent must act on a “real-world” problem where the online reward is critical, such data can be expensive. For instance, a physical robot will degrade over time and must be replaced, and an agent learning to automate a company’s operations may lose money while training. When RL agents begin learning tabula rasa, mastering difficult tasks may be infeasible, as they require significant amounts of data even when using state-of-the-art RL approaches. There are many contemporary approaches to speed up “vanilla” RL methods. Transfer learning (TL) is one such technique.", "title": "" }, { "docid": "22714a54522bf945fded68b85a6b5d80", "text": "The φ-quantile of an ordered sequence of data values is the element with rank φn, where n is the total number of values. Accurate estimates of quantiles are required for the solution of many practical problems. In this paper, we present a new algorithm for estimating the quantile values for disk-resident data.
Our algorithm has the following characteristics: (1) It requires only one pass over the data; (2) It is deterministic; (3) It produces good lower and upper bounds of the true values of the quantiles; (4) It requires no a priori knowledge of the distribution of the data set; (5) It has a scalable parallel formulation; (6) Extra time and memory for computing additional quantiles (beyond the first one) are constant per quantile. We present experimental results on the IBM SP-2. The experimental results show that the algorithm is indeed robust and does not depend on the distribution of the data sets.", "title": "" }, { "docid": "7ef13dad7c0151db4607d619f7bb98a6", "text": "Graph coloring problem is a well-known NP-complete problem and there are many approaches proposed to solve this problem. For a graph coloring algorithm to be efficient, it must color the input graph with minimum colors and must also find the solution in the minimum possible time. Heuristic approaches emphasize on the time complexity while the exact approaches concentrate on the number of colors used. Here, we proposed an approach which solves the graph coloring problem more efficiently by providing minimum number of colors with effectively lesser time than that of the fastest exact algorithm till date. In our approach, we exploit the concept of maximal independent sets using trees. We tested our algorithm on various DIMACS graphs (up to 10000 vertices) and found the results (in terms of time and colors) much more efficient than the existing.", "title": "" }, { "docid": "a3db8f51d9dfa6608677d63492d2fb6f", "text": "In this article, we introduce nonlinear versions of the popular structure tensor, also known as second moment matrix. These nonlinear structure tensors replace the Gaussian smoothing of the classical structure tensor by discontinuity-preserving nonlinear diffusions. While nonlinear diffusion is a well-established tool for scalar and vector-valued data, it has not often been used for tensor images so far. Two types of nonlinear diffusion processes for tensor data are studied: an isotropic one with a scalar-valued diffusivity, and its anisotropic counterpart with a diffusion tensor. We prove that these schemes preserve the positive semidefiniteness of a matrix field and are, therefore, appropriate for smoothing structure tensor fields. The use of diffusivity functions of total variation (TV) type allows us to construct nonlinear structure tensors without specifying additional parameters compared to the conventional structure tensor. The performance of nonlinear structure tensors is demonstrated in three fields where the classic structure tensor is frequently used: orientation estimation, optic flow computation, and corner detection. In all these cases, the nonlinear structure tensors demonstrate their superiority over the classical linear one. Our experiments also show that for corner detection based on nonlinear structure tensors, anisotropic nonlinear tensors give the most precise localisation. q 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "72607f5a6371e1d3e390c93bd0dff25b", "text": "In this paper we present ASPOGAMO, a vision system capable of estimating motion trajectories of soccer players taped on video. The system performs well in a multitude of application scenarios because of its adaptivity to various camera setups, such as single or multiple camera settings, static or dynamic ones. 
Furthermore, ASPOGAMO can directly process image streams taken from TV broadcast, and extract all valuable information despite scene interruptions and cuts between different cameras. The system achieves a high level of robustness through the use of modelbased vision algorithms for camera estimation and player recognition and a probabilistic multi-player tracking framework capable of dealing with occlusion situations typical in team-sports. The continuous interplay between these submodules is adding to both the reliability and the efficiency of the overall system.", "title": "" }, { "docid": "93e93d2278706638859f5f4b1601bfa6", "text": "To acquire accurate, real-time hyperspectral images with high spatial resolution, we develop two types of low-cost, lightweight Whisk broom hyperspectral sensors that can be loaded onto lightweight unmanned autonomous vehicle (UAV) platforms. A system is composed of two Mini-Spectrometers, a polygon mirror, references for sensor calibration, a GPS sensor, a data logger and a power supply. The acquisition of images with high spatial resolution is realized by a ground scanning along a direction perpendicular to the flight direction based on the polygon mirror. To cope with the unstable illumination condition caused by the low-altitude observation, skylight radiation and dark current are acquired in real-time by the scanning structure. Another system is composed of 2D optical fiber array connected to eight Mini-Spectrometers and a telephoto lens, a convex lens, a micro mirror, a GPS sensor, a data logger and a power supply. The acquisition of images is realized by a ground scanning based on the rotation of the micro mirror.", "title": "" }, { "docid": "2746379baa4c59fae63dc92a9c8057bc", "text": "Twenty-five Semantic Web and Database researchers met at the 2011 STI Semantic Summit in Riga, Latvia July 6-8, 2011[1] to discuss the opportunities and challenges posed by Big Data for the Semantic Web, Semantic Technologies, and Database communities. The unanimous conclusion was that the greatest shared challenge was not only engineering Big Data, but also doing so meaningfully. The following are four expressions of that challenge from different perspectives.", "title": "" }, { "docid": "7bd5a1ce9db81d50f1802db0a6623e92", "text": "Goal-Oriented (GO) Dialogue Systems, colloquially known as goal oriented chatbots, help users achieve a predefined goal (e.g. book a movie ticket) within a closed domain. A first step is to understand the user’s goal by using natural language understanding techniques. Once the goal is known, the bot must manage a dialogue to achieve that goal, which is conducted with respect to a learnt policy. The success of the dialogue system depends on the quality of the policy, which is in turn reliant on the availability of high-quality training data for the policy learning method, for instance Deep Reinforcement Learning. Due to the domain specificity, the amount of available data is typically too low to allow the training of good dialogue policies. In this master thesis we introduce a transfer learning method to mitigate the effects of the low in-domain data availability. Our transfer learning based approach improves the bot’s success rate by 20% in relative terms for distant domains and we more than double it for close domains, compared to the model without transfer learning. Moreover, the transfer learning chatbots learn the policy up to 5 to 10 times faster. 
Finally, as the transfer learning approach is complementary to additional processing such as warm-starting, we show that their joint application gives the best outcomes.", "title": "" }, { "docid": "651db77789c5f5edaa933534255c88d6", "text": "Rapid increase in internet users along with growing power of online review sites and social media has given birth to Sentiment analysis or Opinion mining, which aims at determining what other people think and comment. Sentiments or Opinions contain public generated content about products, services, policies and politics. People are usually interested to seek positive and negative opinions containing likes and dislikes, shared by users for features of particular product or service. Therefore product features or aspects have got significant role in sentiment analysis. In addition to sufficient work being performed in text analytics, feature extraction in sentiment analysis is now becoming an active area of research. This review paper discusses existing techniques and approaches for feature extraction in sentiment analysis and opinion mining. In this review we have adopted a systematic literature review process to identify areas well focused by researchers, least addressed areas are also highlighted giving an opportunity to researchers for further work. We have also tried to identify most and least commonly used feature selection techniques to find research gaps for future work.", "title": "" } ]
scidocsrr
2e346393e0424137069068e39ae9b926
Architectural Style Classification of Building Facade Windows
[ { "docid": "47c05e54488884854e6bcd5170ed65e8", "text": "This work is about a novel methodology for window detection in urban environments and its multiple use in vision system applications. The presented method for window detection includes appropriate early image processing, provides a multi-scale Haar wavelet representation for the determination of image tiles which is then fed into a cascaded classifier for the task of window detection. The classifier is learned from a Gentle Adaboost driven cascaded decision tree on masked information from training imagery and is tested towards window based ground truth information which is together with the original building image databases publicly available. The experimental results demonstrate that single window detection is to a sufficient degree successful, e.g., for the purpose of building recognition, and, furthermore, that the classifier is in general capable to provide a region of interest operator for the interpretation of urban environments. The extraction of this categorical information is beneficial to index into search spaces for urban object recognition as well as aiming towards providing a semantic focus for accurate post-processing in 3D information processing systems. Targeted applications are (i) mobile services on uncalibrated imagery, e.g. , for tourist guidance, (ii) sparse 3D city modeling, and (iii) deformation analysis from high resolution imagery.", "title": "" }, { "docid": "14360f8801fcff22b7a0059b322ebf9a", "text": "Supplying realistically textured 3D city models at ground level promises to be useful for pre-visualizing upcoming traffic situations in car navigation systems. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the required maneuver will be more easily understandable. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. In this paper, we present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed. Objects in the environment, such as cars and pedestrians, may however disturb the reconstruction, as they violate the simplified geometry assumptions, leading to visually unpleasant artifacts and degrading the visual realism of the resulting 3D city model. Unfortunately, such objects are prevalent in urban scenes. We therefore extend the reconstruction framework by integrating it with an object recognition module that automatically detects cars in the input video streams and localizes them in 3D. The two components of our system are tightly integrated and benefit from each other’s continuous input. 3D reconstruction delivers geometric scene context, which greatly helps improve detection precision. The detected car locations, on the other hand, are used to instantiate virtual placeholder models which augment the visual realism of the reconstructed city model.", "title": "" } ]
[ { "docid": "4bc203b063d3e344b8537397de3456ea", "text": "Sustainability research faces many challenges as respective environmental, urban and regional contexts are experiencing rapid changes at an unprecedented spatial granularity level, which involves growing massive data and the need for spatial relationship detection at a faster pace. Spatial join is a fundamental method for making data more informative with respect to spatial relations. The dramatic growth of data volumes has led to increased focus on high-performance large-scale spatial join. In this paper, we present Spatial Join with Spark (SJS), a proposed high-performance algorithm, that uses a simple, but efficient, uniform spatial grid to partition datasets and joins the partitions with the built-in join transformation of Spark. SJS utilizes the distributed in-memory iterative computation of Spark, then introduces a calculation-evaluating model and in-memory spatial repartition technology, which optimize the initial partition by evaluating the calculation amount of local join algorithms without any disk access. We compare four in-memory spatial join algorithms in SJS for further performance improvement. Based on extensive experiments with real-world data, we conclude that SJS outperforms the Spark and MapReduce implementations of earlier spatial join approaches. This study demonstrates that it is promising to leverage high-performance computing for large-scale spatial join analysis. The availability of large-sized geo-referenced datasets along with the high-performance computing technology can raise great opportunities for sustainability research on whether and how these new trends in data and technology can be utilized to help detect the associated trends and patterns in the human-environment dynamics.", "title": "" }, { "docid": "088f4245f749feaf0cc88d9f374e17bf", "text": "Trajectory classification, i.e., model construction for predicting the class labels of moving objects based on their trajectories and other features, has many important, real-world applications. A number of methods have been reported in the literature, but due to using the shapes of whole trajectories for classification, they have limited classification capability when discriminative features appear at parts of trajectories or are not relevant to the shapes of trajectories. These situations are often observed in long trajectories spreading over large geographic areas. Since an essential task for effective classification is generating discriminative features, a feature generation framework TraClass for trajectory data is proposed in this paper, which generates a hierarchy of features by partitioning trajectories and exploring two types of clustering: (1) region-based and (2) trajectory-based. The former captures the higher-level region-based features without using movement patterns, whereas the latter captures the lower-level trajectory-based features using movement patterns. The proposed framework overcomes the limitations of the previous studies because trajectory partitioning makes discriminative parts of trajectories identifiable, and the two types of clustering collaborate to find features of both regions and sub-trajectories. 
Experimental results demonstrate that TraClass generates high-quality features and achieves high classification accuracy from real trajectory data.", "title": "" }, { "docid": "a8a51268e3e4dc3b8dd5102dafcb8f36", "text": "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node’s local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "title": "" }, { "docid": "c4d204b8ceda86e9d8e4ca56214f0ba3", "text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" }, { "docid": "ecc7773c38429bf3a3701c38310f165e", "text": "In this paper, we present the first multi-body non-rigid structure-from-motion (SFM) method, which simultaneously reconstructs and segments multiple objects that are undergoing non-rigid deformation over time. Under our formulation, 3D trajectories for each non-rigid object can be well approximated with a sparse affine combination of other 3D trajectories from the same object. The resultant optimization is solved by the alternating direction method of multipliers (ADMM). We demonstrate the efficacy of the proposed method through extensive experiments on both synthetic and real data sequences. Our method outperforms other alternative methods, such as first clustering the 2D feature tracks to groups and then doing non-rigid reconstruction in each group or first conducting 3D reconstruction by using single subspace assumption and then clustering the 3D trajectories into groups.", "title": "" }, { "docid": "c1ae8ea2da982e5094fdd9816e249b53", "text": "Corporate Social Responsibility (CSR) reporting receives much attention nowadays. Communication with stakeholders is a part of assumed social responsibility, thus the quality of information disclosed in CSR reports has a significant impact on fulfilment of the responsibility. 
The authors use content analysis of selected CSR reports to describe and assess patterns and structure of information disclosed in them. CSR reports of Polish companies have similar structures at a very high level of analysis, but a more detailed study reveals much diversity in approaches to the report’s content. Even fairly similar companies may devote significantly different amounts of space to the same issue. The number of similar stakeholders varies irrespectively of the company’s size. Considerable diversity of reporting patterns results from the nature of CSR reporting, because it concerns highly entity-specific issues. Thus, such considerable diversity is not surprising. However, many initiatives and efforts are devoted to greater comparability of reporting, so a greater degree of uniformity can be expected. Similar conclusions may be drawn from integrated reports’ analysis, though a small sample reflects the relative novelty of this trend.", "title": "" }, { "docid": "cfe603bceefb9c0c4836feb2922523ff", "text": "Making ITS available on the World Wide Web (WWW) is a way to integrate the flexibility and intelligence of ITS with world-wide availability of WWW applications. This paper discusses the problems of developing WWW-available ITS and, in particular, the problem of porting existing ITS to a WWW platform. We present the system ELMART which is a WWW-based ITS to support learning programming in Lisp. ELM-ART demonstrates how several known ITS technologies can be implemented in WWW context. 1 ITS Technologies and WWW Context WWW opens new ways of learning for many people. However, most of the existing educational WWW applications use simplest solutions and are much more weak and restricted than existing 'on-site' educational systems and tools. In particular, most WWW educational systems do not use powerful ITS technologies. A promising direction of research is to port these technologies to a WWW platform, thus joining the flexibility and intelligence of ITS with world-wide availability of WWW applications. Most of traditional intelligent techniques applied in ITS can be roughly classified into three groups which we will name as technologies: curriculum sequencing, interactive problem solving support, and intelligent analysis of student solutions. All these technologies are aimed at supporting the \"intelligent\" duties of the human teacher which can not be supported by traditional non-intelligent tutoring systems. Curriculum sequencing and intelligent analysis of student solutions are the oldest and best-studied technologies in the domain of ITS. Most ITS developed during the first 10 years of ITS history belong to these groups. The technology of interactive problem solving support is a newer one, but it is more \"intelligent\" and supportive (it helps the student in the most difficult part of the learning process and provides the most valuable support for the teacher in the classroom). It is not surprising that it became a dominating technology during the last 15 years. The WWW context changes the attitudes to traditional ITS techniques [Brusilovsky, 1995]. For example, interactive problem solving support currently seems to be a less suitable technology for WWWbased ITS. Vice versa, the two older technologies seem to be very usable and helpful in the WWW context. Intelligent analysis of solutions needs only one interaction between browser and server for a complete solution. 
It can provide intelligent feedback and perform student modeling when interactive problem solving support is impossible. Curriculum sequencing becomes very important to guide the student through the hyperspace of available information. In addition to traditional ITS technologies, some of more recent (and much less used) ITS technologies become important. Two examples are adaptive hypermedia [Beaumont & Brusilovsky, 1995] and example-based problem solving [Weber, 1995]. This paper discusses the problems of developing WWW-based ITS and, in particular, the problem of porting existing ITS to a WWW platform. We present the system ELM-ART which is an ITS to support learning programming in Lisp. ELMART is developed on the base of the system ELM-PE [Weber & Möllenberg, 1994] specially to be used on WWW. The presentation is centered around intelligent features of ELM-ART (a number of interesting non-intelligent features of ELM-ART are described elsewhere [Schwarz, Brusilovsky & Weber, 1996]). The goal of the paper is to demonstrate how several known ITS technologies can be implemented on WWW and what has to be added when porting a traditional ITS to WWW.", "title": "" }, { "docid": "c75b7ad0faf841b7ec4ae7f91d236259", "text": "People have been shown to project lifelike attributes onto robots and to display behavior indicative of empathy in human-robot interaction. Our work explores the role of empathy by examining how humans respond to a simple robotic object when asked to strike it. We measure the effects of lifelike movement and stories on people's hesitation to strike the robot, and we evaluate the relationship between hesitation and people's trait empathy. Our results show that people with a certain type of high trait empathy (empathic concern) hesitate to strike the robots. We also find that high empathic concern and hesitation are more strongly related for robots with stories. This suggests that high trait empathy increases people's hesitation to strike a robot, and that stories may positively influence their empathic responses.", "title": "" }, { "docid": "1baaa67ff7b4d00d6f03ae908cf1ca71", "text": "Function approximation has been found in many applications. The radial basis function (RBF) network is one approach which has shown a great promise in this sort of problems because of its faster learning capacity. A traditional RBF network takes Gaussian functions as its basis functions and adopts the least-squares criterion as the objective function, However, it still suffers from two major problems. First, it is difficult to use Gaussian functions to approximate constant values. If a function has nearly constant values in some intervals, the RBF network will be found inefficient in approximating these values. Second, when the training patterns incur a large error, the network will interpolate these training patterns incorrectly. In order to cope with these problems, an RBF network is proposed in this paper which is based on sequences of sigmoidal functions and a robust objective function. The former replaces the Gaussian functions as the basis function of the network so that constant-valued functions can be approximated accurately by an RBF network, while the latter is used to restrain the influence of large errors. 
Compared with traditional RBF networks, the proposed network demonstrates the following advantages: (1) better capability of approximating the underlying functions; (2) faster learning speed; (3) better network size; (4) high robustness to outliers.", "title": "" }, { "docid": "e667256e724286f268af45590e5d028e", "text": "Cloud computing has raised IT to new limits by offering the market flexible data storage and volume with scalable computing and processing power to match elastic demand and supply, while reducing capital expenditure. Usually cloud computing services are delivered by a third-party provider who owns the infrastructure. Cloud computing offers an innovative business model for organizations to adopt IT services without upfront investment. Security is one of the major issues that hamper the growth of the cloud. Today, leading players such as Amazon, Google, IBM, Microsoft, and salesforce.com offer their cloud infrastructure for services.", "title": "" }, { "docid": "3f6f1d7059786d4804074cfb57367aa7", "text": "Context: Since the introduction of evidence-based software engineering in 2004, systematic literature review (SLR) has been increasingly used as a method for conducting secondary studies in software engineering. Two tertiary studies, published in 2009 and 2010, identified and analysed 54 SLRs published in journals and conferences in the period between 1st January 2004 and 30th June 2008. Objective: In this article, our goal was to extend and update the two previous tertiary studies to cover the period between 1st July 2008 and 31st December 2009. We analysed the quality, coverage of software engineering topics, and potential impact of published SLRs for education and practice. Method: We performed automatic and manual searches for SLRs published in journals and conference proceedings, analysed the relevant studies, and compared and integrated our findings with the two previous tertiary studies. Results: We found 67 new SLRs addressing 24 software engineering topics. Among these studies, 15 were considered relevant to the undergraduate educational curriculum, and 40 appeared of possible interest to practitioners. We found that the number of SLRs in software engineering is increasing, the overall quality of the studies is improving, and the number of researchers and research organisations worldwide that are conducting SLRs is also increasing and spreading. Conclusion: Our findings suggest that the software engineering research community is starting to adopt SLRs consistently as a research method. However, the majority of the SLRs did not evaluate the quality of primary studies and failed to provide guidelines for practitioners, thus decreasing their potential impact on software engineering practice.", "title": "" }, { "docid": "9e87eea336b1cb98d858004ff2bbcf13", "text": "Anomaly detection is a popular problem in many fields. We investigate an anomaly detection method based on probability density functions (PDFs) of different statuses. The constructed PDFs require only a few training samples, based on the Kullback–Leibler divergence method and a small-signal assumption. The measurement matrix was derived using principal component analysis (PCA), and the statistical detection indicator was established under an i.i.d. Gaussian noise background. The performance of the proposed anomaly detection method was tested with through-wall human detection experiments.
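A minimal sketch of the kind of detector this abstract describes: fit a Gaussian PDF for the reference status from a few training frames, then flag a test frame when the Kullback-Leibler divergence of its fitted Gaussian from the reference exceeds a threshold. The dimensions, the diagonal-covariance shortcut, and the threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fit_diag_gaussian(samples):
    mu = samples.mean(axis=0)
    var = samples.var(axis=0) + 1e-6
    return mu, var

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    # KL(P || Q) for diagonal-covariance Gaussians.
    return 0.5 * np.sum(np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(20, 8))      # a few "no target" training frames
mu_q, var_q = fit_diag_gaussian(background)

test_empty = rng.normal(0.0, 1.0, size=(20, 8))       # same statistics as the reference
test_target = rng.normal(0.8, 1.5, size=(20, 8))      # shifted statistics (target present)

threshold = 2.0                                        # would be calibrated in practice
for name, frames in [("empty", test_empty), ("target", test_target)]:
    mu_p, var_p = fit_diag_gaussian(frames)
    d = kl_diag_gaussians(mu_p, var_p, mu_q, var_q)
    print(name, round(float(d), 3), "anomaly" if d > threshold else "normal")
```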
The results showed that the proposed method could detect human beings behind brick and gypsum walls, but had unremarkable results for concrete walls.", "title": "" }, { "docid": "53518256d6b4f3bb4e8dcf28a35f9284", "text": "Customers often evaluate products at brick-and-mortar stores to identify their “best fit” product but buy it for a lower price at a competing online retailer. This free-riding behavior by customers is referred to as “showrooming” and we show that this is detrimental to the profits of the brick-and-mortar stores. We first analyze price matching as a short-term strategy to counter showrooming. Since customers purchase from the store at lower than the store-posted price when they ask for price matching, one would expect the price-matching strategy to be less effective as the fraction of customers who seek the matching increases. However, our results show that with an increase in the fraction of customers who seek price matching, the store’s profits initially decrease and then increase. While price matching could be used even when customers do not exhibit showrooming behavior, we find that it is more effective when customers do showroom. We then study exclusivity of product assortments as a long-term strategy to counter showrooming. This strategy can be implemented in two different ways: one, by arranging for exclusivity of known brands (e.g. Macy’s has such an arrangement with Tommy Hilfiger), or, two, through creation of store brands at the brick-and-mortar store (T.J.Maxx uses a large number of store brands). Our analysis suggests that implementing exclusivity through store brands is better than exclusivity through known brands when the product category has few digital attributes. However, when customers do not showroom, the known-brand strategy dominates the store-brand strategy.", "title": "" }, { "docid": "5dc4d740028b009f60c24d3107632aa7", "text": "Modern technology has allowed real-time data collection in a variety of domains, ranging from environmental monitoring to healthcare. Consequently, there is a growing need for algorithms capable of performing inferential tasks in an online manner, continuously revising their estimates to reflect the current status of the underlying process. In particular, we are interested in constructing online and temporally adaptive classifiers capable of handling the possibly drifting decision boundaries arising in streaming environments. We first make a quadratic approximation to the log-likelihood that yields a recursive algorithm for fitting logistic regression online. We then suggest a novel way of equipping this framework with self-tuning forgetting factors. The resulting scheme is capable of tracking changes in the underlying probability distribution, adapting the decision boundary appropriately and hence maintaining high classification accuracy in dynamic or unstable environments. We demonstrate the scheme’s effectiveness in both real and simulated streaming environments.", "title": "" }, { "docid": "4de2536d5c56d6ade1b3eff97ac8037a", "text": "We develop a maximum-likelihood (ML) algorithm for estimation and correction (autofocus) of phase errors induced in synthetic-aperture-radar (SAR) imagery. Here, M pulse vectors in the range-compressed domain are used as input for simultaneously estimating M - 1 phase values across the aperture.
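One way the eigenvector-based estimate that this abstract goes on to describe can be sketched: build the M x M sample covariance of the range-compressed pulses and read the phase-error estimate off its principal eigenvector. The array sizes, the synthetic phase error, and the referencing convention are illustrative assumptions, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 32, 256                                     # pulses (aperture) x range bins
true_phase = np.cumsum(rng.normal(0.0, 0.3, M))    # slowly varying phase error
true_phase -= true_phase[0]

targets = rng.standard_normal(N) + 1j * rng.standard_normal(N)
data = np.exp(1j * true_phase)[:, None] * targets[None, :]          # ideal data x error
data += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

C = data @ data.conj().T / N                       # M x M sample covariance over range bins
_, V = np.linalg.eigh(C)
v = V[:, -1]                                       # eigenvector of the largest eigenvalue
est = np.unwrap(np.angle(v * np.conj(v[0])))       # phases relative to the first pulse

print("max phase-estimate error (rad):", float(np.max(np.abs(est - true_phase))))
```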
The solution involves an eigenvector of the sample covariance matrix of the range-compressed data. The estimator is then used within the basic structure of the phase gradient autofocus (PGA) algorithm, replacing the original phase-estimation kernel. We show that, in practice, the new algorithm provides excellent restorations to defocused SAR imagery, typically in only one or two iterations. The performance of the new phase estimator is demonstrated essentially to achieve the Cramer-Rao lower bound on estimation-error variance for all but small values of target-toclutter ratio. We also show that for the case in which M is equal to 2, the ML estimator is similar to that of the original PGA method but achieves better results in practice, owing to a bias inherent in the original PGA phaseestimation kernel. Finally, we discuss the relationship of these algorithms to the shear-averaging and spatialcorrelation methods, two other phase-correction techniques that utilize the same phase-estimation kernel but that produce substantially poorer performance because they do not employ several fundamental signal-processing steps that are critical to the algorithms of the PGA class.", "title": "" }, { "docid": "9c8fefeb34cc1adc053b5918ea0c004d", "text": "Mezzo is a computer program designed that procedurally writes Romantic-Era style music in real-time to accompany computer games. Leitmotivs are associated with game characters and elements, and mapped into various musical forms. These forms are distinguished by different amounts of harmonic tension and formal regularity, which lets them musically convey various states of markedness which correspond to states in the game story. Because the program is not currently attached to any game or game engine, “virtual” gameplays were been used to explore the capabilities of the program; that is, videos of various game traces were used as proxy examples. For each game trace, Leitmotivs were input to be associated with characters and game elements, and a set of ‘cues’ was written, consisting of a set of time points at which a new set of game data would be passed to Mezzo to reflect the action of the game trace. Examples of music composed for one such game trace, a scene from Red Dead Redemption, are given to illustrate the various ways the program maps Leitmotivs into different levels of musical markedness that correspond with the game state. Introduction Mezzo is a computer program designed by the author that procedurally writes Romantic-Era-style music in real time to accompany computer games. It was motivated by the desire for game music to be as rich and expressive as that written for traditional media such as opera, ballet, or film, while still being procedurally generated, and thus able to adapt to a variety of dramatic situations. To do this, it models deep theories of musical form and semiotics in Classical and Romantic music. Characters and other important game elements like props and environmental features are given Leitmotivs, which are constantly rearranged and developed throughout gameplay in ways Copyright © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. that evoke the conditions and relationships of these elements. Story states that occur in a game are musically conveyed by employing or withholding normative musical features. This creates various states of markedness, a concept which is defined in semiotic terms as a valuation given to difference (Hatten 1994). 
An unmarked state or event is one that conveys normativity, while an unmarked one conveys deviation from or lack of normativity. A succession of musical sections that passes through varying states of markedness and unmarkedness, producing various trajectories of expectation and fulfillment, tension and release, correlates with the sequence of episodes that makes up a game story’s structure. Mezzo uses harmonic tension and formal regularity as its primary vehicles for musically conveying markedness; it is constantly adjusting the values of these features in order to express states of the game narrative. Motives are associated with characters, and markedness with game conditions. These two independent associations allow each coupling of a motive with a level of markedness to be interpreted as a pair of coordinates in a state space (a “semiotic square”), where various regions of the space correspond to different expressive musical qualities (Grabócz 2009). Certain patterns of melodic repetition combined with harmonic function became conventionalized in the Classical Era as normative forms, labeled the sentence, period, and sequence (Caplin 1998, Schoenberg 1969). These forms exist in the middleground of a musical work, each comprising one or several phrase repetitions and one or a small number of harmonic cadences. Each musical form has a normative structure, and various ways in which it can be deformed by introducing irregular amounts of phrase repetition to make the form asymmetrical. Mezzo’s expressive capability comes from the idea that there are different perceptible levels of formal irregularity that can be quantitatively measured, and that these different levels convey different levels of markedness. Musical Metacreation: Papers from the 2012 AIIDE Workshop AAAI Technical Report WS-12-16", "title": "" }, { "docid": "03daea46a533bcc91cc07071f7c2ca2a", "text": "This article describes the RMediation package,which offers various methods for building confidence intervals (CIs) for mediated effects. The mediated effect is the product of two regression coefficients. The distribution-of-the-product method has the best statistical performance of existing methods for building CIs for the mediated effect. RMediation produces CIs using methods based on the distribution of product, Monte Carlo simulations, and an asymptotic normal distribution. Furthermore, RMediation generates percentiles, quantiles, and the plot of the distribution and CI for the mediated effect. An existing program, called PRODCLIN, published in Behavior Research Methods, has been widely cited and used by researchers to build accurate CIs. PRODCLIN has several limitations: The program is somewhat cumbersome to access and yields no result for several cases. RMediation described herein is based on the widely available R software, includes several capabilities not available in PRODCLIN, and provides accurate results that PRODCLIN could not.", "title": "" }, { "docid": "5339bd241f053214673ead767476077d", "text": "----------------------------------------------------------------------ABSTRACT----------------------------------------------------------This paper is a general survey of all the security issues existing in the Internet of Things (IoT) along with an analysis of the privacy issues that an end-user may face as a consequence of the spread of IoT. The majority of the survey is focused on the security loopholes arising out of the information exchange technologies used in Internet of Things. 
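The RMediation abstract earlier on this record mentions Monte Carlo confidence intervals for the mediated effect, i.e., the product of two regression coefficients. A minimal sketch of that idea follows; the coefficient estimates, standard errors, and confidence level are invented inputs, not output of the R package.

```python
import numpy as np

def monte_carlo_ci(a, se_a, b, se_b, level=0.95, n_draws=100_000, seed=0):
    # Sample each coefficient from a normal around its estimate, take percentiles of the product.
    rng = np.random.default_rng(seed)
    products = rng.normal(a, se_a, n_draws) * rng.normal(b, se_b, n_draws)
    lo, hi = np.percentile(products, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

print(monte_carlo_ci(a=0.40, se_a=0.10, b=0.35, se_b=0.12))
```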
No countermeasure to the security drawbacks has been analyzed in the paper.", "title": "" }, { "docid": "b1a538752056e91fd5800911f36e6eb0", "text": "BACKGROUND\nThe current, so-called \"Millennial\" generation of learners is frequently characterized as having deep understanding of, and appreciation for, technology and social connectedness. This generation of learners has also been molded by a unique set of cultural influences that are essential for medical educators to consider in all aspects of their teaching, including curriculum design, student assessment, and interactions between faculty and learners.\n\n\nAIM\n The following tips outline an approach to facilitating learning of our current generation of medical trainees.\n\n\nMETHOD\n The method is based on the available literature and the authors' experiences with Millennial Learners in medical training.\n\n\nRESULTS\n The 12 tips provide detailed approaches and specific strategies for understanding and engaging Millennial Learners and enhancing their learning.\n\n\nCONCLUSION\n With an increased understanding of the characteristics of the current generation of medical trainees, faculty will be better able to facilitate learning and optimize interactions with Millennial Learners.", "title": "" }, { "docid": "ef02508d3d05cdda0b1b39b53f3820ec", "text": "In natural language generation, a meaning representation of some kind is successively transformed into a sentence or a text. Naturally, a central subtask of this problem is the choice of words, orlexicalization. In this paper, we propose four major issues that determine how a generator tackles lexicalization, and survey the contributions that researchers have made to them. Open problems are identified, and a possible direction for future research is sketched.", "title": "" } ]
scidocsrr
a55409faf7857bf5aed68c886c059823
CBE: Corpus-based of emotion for emotion detection in text document
[ { "docid": "ad6ae9f8280c0d466b9db2f47fea6bcc", "text": "I examine what would be necessary to move part-of-speech tagging performance from its current level of about 97.3% token accuracy (56% sentence accuracy) to close to 100% accuracy. I suggest that it must still be possible to greatly increase tagging performance and examine some useful improvements that have recently been made to the Stanford Part-of-Speech Tagger. However, an error analysis of some of the remaining errors suggests that there is limited further mileage to be had either from better machine learning or better features in a discriminative sequence classifier. The prospects for further gains from semisupervised learning also seem quite limited. Rather, I suggest and begin to demonstrate that the largest opportunity for further progress comes from improving the taxonomic basis of the linguistic resources from which taggers are trained. That is, from improved descriptive linguistics. However, I conclude by suggesting that there are also limits to this process. The status of some words may not be able to be adequately captured by assigning them to one of a small number of categories. While conventions can be used in such cases to improve tagging consistency, they lack a strong linguistic basis. 1 Isn’t Part-of-Speech Tagging a Solved Task? At first glance, current part-of-speech taggers work rapidly and reliably, with per-token accuracies of slightly over 97% [1–4]. Looked at more carefully, the story is not quite so rosy. This evaluation measure is easy both because it is measured per-token and because you get points for every punctuation mark and other tokens that are not ambiguous. It is perhaps more realistic to look at the rate of getting whole sentences right, since a single bad mistake in a sentence can greatly throw off the usefulness of a tagger to downstream tasks such as dependency parsing. Current good taggers have sentence accuracies around 55– 57%, which is a much more modest score. Accuracies also drop markedly when there are differences in topic, epoch, or writing style between the training and operational data. Still, the perception has been that same-epoch-and-domain part-of-speech tagging is a solved problem, and its accuracy cannot really be pushed higher. I think it is a common shared meme in at least the U.S. computational linguistics community that interannotator agreement or the limit of human consistency on part-of-speech tagging is 97%. As various authors have noted, e.g., [5], the second wave of machine learning part-of-speech taggers, which began with the work of Collins [6] and includes the other taggers cited above, routinely deliver accuracies a little above this level of 97%, when tagging material from the same source and epoch on which they were trained. This has been achieved by good modern discriminative machine learning methods, coupled with careful tuning of the feature set and sometimes classifier combination or semi-supervised learning methods. Viewed by this standard, these taggers now clearly exceed human performance on the task. Justifiably, considerable attention has moved to other concerns, such as getting part-of speech (POS) taggers to work well in more informal domains, in adaptation scenarios, and within reasonable speed and memory limits. What is the source of the belief that 97% is the limit of human consistency for part-of-speech tagging? It is easy to test for human tagging reliability: one just makes multiple measurements and sees how consistent the results are. 
I believe the value comes from the README.pos file in the tagged directory of early releases of the Penn Treebank. It suggests that the “estimated error rate for the POS tags is about 3%”. If one delves deeper, it seems like this 97% agreement number could actually be on the high side. In the journal article on the Penn Treebank [7], there is considerable detail about annotation, and in particular there is description of an early experiment on human POS tag annotation of parts of the Brown Corpus. Here it was found that if two annotators tagged for POS, the interannotator disagreement rate was actually 7.2%. If this was changed to a task of correcting the output of an automatic tagger (as was done for the actual Penn Treebank), then the disagreement rate dropped to 4.1%, and to 3.5% once one difficult text is excluded. Some of the agreement is then presumably both humans adopting the conventions of the automatic POS tagger rather than true human agreement, a topic to which I return later. If this is the best that humans can give us, the performance of taggers is clearly at or above its limit. But this seems surprising – anyone who has looked for a while at tagger output knows that while taggers are quite good, they regularly make egregious errors. Similarly, examining portions of the Penn Treebank by hand, it is just very obvious that there are lots of errors that are just mistakes rather than representing uncertainties or difficulties in the task. Table 1 shows a few tagging errors from the beginning of section 02 of the training data. These are all cases where I think there is no doubt about what the correct tag should be, but that nevertheless the annotator failed to assign it. It seems 1 This text appears up through LDC95T7 Treebank release 2; the statement no longer appears in the much shorter README included in the current LDC99T42 Treebank release 3). This error rate is also mentioned in [7, pp. 327–8]. 2 My informal impression is that the accuracy of sections 00 and 01 is considerably worse, perhaps reflecting a “burn in” process on the part of the annotators. I think it is in part for this reason that parsers have been conventionally trained on sections 02–21 of the Penn Treebank. But for POS tagging, most work has adopted the splits introduced by [6], which include sections 00 and 01 in the training data. clear that the inter-annotator agreement of humans depends on many factors, including their aptitude for the task, how much they are paying attention, how much guidance they are given and how much of the guidance they are able to remember. Indeed, Marcus et al. [7, p. 328] express the hope that the POS error rate can be reduced to 1% by getting corrections from multiple annotators, adjudicating disagreements, and using a specially retrained tagger. However, unfortunately, this work never took place. But using the tools developed over the last two decades given the existence of the Penn Treebank, we are now in a much better position to do this, using semi-automated methods, as I discuss below. Table 1. Examples of errors in Penn Treebank assigned parts-of-speech, from section", "title": "" }, { "docid": "28b2bbcfb8960ff40f2fe456a5b00729", "text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. 
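A simplified Lesk-style disambiguator in the spirit of the approach described above: pick the WordNet sense whose gloss overlaps most with the context words. This toy version uses only the target word's own glosses, not the richer related-synset expansion the abstract's adaptation exploits, and it assumes nltk with the WordNet data downloaded (nltk.download('wordnet')).

```python
from nltk.corpus import wordnet as wn

def simplified_lesk(word, context_words):
    context = set(w.lower() for w in context_words)
    best_sense, best_overlap = None, -1
    for sense in wn.synsets(word):
        gloss = set(sense.definition().lower().split())
        overlap = len(gloss & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

sentence = "I deposited the cheque at the bank and withdrew some cash".split()
sense = simplified_lesk("bank", sentence)
print(sense, "-", sense.definition())
```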
This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation systems.", "title": "" }, { "docid": "9625b24acc9c0de66c65b0ae843b7dad", "text": "SenticNet is currently one of the most comprehensive freely available semantic resources for opinion mining. However, it only provides numerical polarity scores, while more detailed sentiment-related information for its concepts is often desirable. Another important resource for opinion mining and sentiment analysis is WordNet-Affect, which in turn lacks quantitative information. We report work on automatically merging these two resources by assigning emotion labels to more than 2700 concepts.", "title": "" } ]
[ { "docid": "ac3ed36f4253525ff54bf4b0931479fc", "text": "This paper presents a design for a high-efficiency power amplifier with an output power of more than 100W, and an ultra-broad bandwidth from 10 to 500MHz. The amplifier has a 4-way push-pull configuration using Guanella's 1∶1 transmission line transformer. A negative feedback network is adopted to make the power gain flat enough over the operating bandwidth. The implemented power amplifier exhibits a power gain of 29.2±1.8dB from 10 to 500MHz band with its power-added efficiency (PAE) being greater than 43%, and the second-and third-harmonic distortions are below −29dBc and −9.78dBc, respectively, at an output power of 100W over the entire frequency band.", "title": "" }, { "docid": "6c8151eee3fcfaec7da724c2a6899e8f", "text": "Classic work on interruptions by Zeigarnik showed that tasks that were interrupted were more likely to be recalled after a delay than tasks that were not interrupted. Much of the literature on interruptions has been devoted to examining this effect, although more recently interruptions have been used to choose between competing designs for interfaces to complex devices. However, none of this work looks at what makes some interruptions disruptive and some not. This series of experiments uses a novel computer-based adventure-game methodology to investigate the effects of the length of the interruption, the similarity of the interruption to the main task, and the complexity of processing demanded by the interruption. It is concluded that subjects make use of some form of nonarticulatory memory which is not affected by the length of the interruption. It is affected by processing similar material however, and by a complex mentalarithmetic task which makes large demands on working memory.", "title": "" }, { "docid": "feca14524ff389c59a4d6f79954f26e3", "text": "Zero shot learning (ZSL) is about being able to recognize gesture classes that were never seen before. This type of recognition involves the understanding that the presented gesture is a new form of expression from those observed so far, and yet carries embedded information universal to all the other gestures (also referred as context). As part of the same problem, it is required to determine what action/command this new gesture conveys, in order to react to the command autonomously. Research in this area may shed light to areas where ZSL occurs, such as spontaneous gestures. People perform gestures that may be new to the observer. This occurs when the gesturer is learning, solving a problem or acquiring a new language. The ability of having a machine recognizing spontaneous gesturing, in the same manner as humans do, would enable more fluent human-machine interaction. In this paper, we describe a new paradigm for ZSL based on adaptive learning, where it is possible to determine the amount of transfer learning carried out by the algorithm and how much knowledge is acquired from a new gesture observation. Another contribution is a procedure to determine what are the best semantic descriptors for a given command and how to use those as part of the ZSL approach proposed.", "title": "" }, { "docid": "7084fd27fcb249eff69e1b21f32abd0a", "text": "I review briefly different aspects of the MOND paradigm, with emphasis on phenomenology, epitomized here by many MOND laws of galactic motion–analogous to Kepler's laws of planetary motion. I then comment on the possible roots of MOND in cosmology, possibly the deepest and most far reaching aspect of MOND. 
This is followed by a succinct account of existing underlying theories. I also reflect on the implications of MOND's successes for the dark matter (DM) paradigm: MOND predictions imply that baryons alone accurately determine the full field of each and every individual galactic object. This conflicts with the expectations in the DM paradigm because of the haphazard formation and evolution of galactic objects and the very different influences that baryons and DM are subject to during the evolution, as evidenced, e.g., by the very small baryon-to-DM fraction in galaxies (compared with the cosmic value). All this should disabuse DM advocates of the thought that DM will someday be able to reproduce MOND: it is inconceivable that the modicum of baryons left over in galaxies can be made to determine everything if a much heavier DM component is present.", "title": "" }, { "docid": "029cca0b7e62f9b52e3d35422c11cea4", "text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.", "title": "" }, { "docid": "b72bc9ee1c32ec3d268abd1d3e51db25", "text": "As a newly developing academic domain, researches on Mobile learning are still in their initial stage. Meanwhile, M-blackboard comes from Mobile learning. This study attempts to discover the factors impacting the intention to adopt mobile blackboard. Eleven selected model on the Mobile learning adoption were comprehensively reviewed. From the reviewed articles, the most factors are identified. Also, from the frequency analysis, the most frequent factors in the Mobile blackboard or Mobile learning adoption studies are performance expectancy, effort expectancy, perceived playfulness, facilitating conditions, self-management, cost and past experiences. The descriptive statistic was performed to gather the respondents’ demographic information. It also shows that the respondents agreed on nearly every statement item. Pearson correlation and regression analysis were also conducted.", "title": "" }, { "docid": "ad9f074e86a1eea6985f8e9ebf115078", "text": "Podosomes are highly dynamic actin-rich adhesion structures in cells of myeloid lineage and some transformed cells. Unlike transformed mesenchymal cell types, podosomes are the sole adhesion structure in macrophage and thus mediate all contact with adhesion substrate, including movement through complex tissues for immune surveillance. The existence of podosomes in inflammatory macrophages and transformed cell types suggest an important role in tissue invasion. 
The proteome, assembly, and maintenance of podosomes are emerging, but remain incompletely defined. Previously, we reported a formin homology sequence and actin assembly activity in association with macrophage beta-3 integrin. In this study we demonstrate by quantitative reverse transcriptase polymerase chain reaction and Western blotting that the formin FRL1 is specifically upregulated during monocyte differentiation to macrophages. We show that the formin FRL1 localizes to the actin-rich cores of primary macrophage podosomes. FRL1 co-precipitates with beta-3 integrin and both fixed and live cell fluorescence microscopy show that endogenous and overexpressed FRL1 selectively localize to macrophage podosomes. Targeted disruption of FRL1 by siRNA results in reduced cell adhesion and disruption of podosome dynamics. Our data suggest that FRL1 is responsible for modifying actin at the macrophage podosome and may be involved in actin cytoskeleton dynamics during adhesion and migration within tissues.", "title": "" }, { "docid": "b98585e7ed4b34afb72f81aeae2ebdcc", "text": "The capability of transcribing music audio into music notation is a fascinating example of human intelligence. It involves perception (analyzing complex auditory scenes), cognition (recognizing musical objects), knowledge representation (forming musical structures), and inference (testing alternative hypotheses). Automatic music transcription (AMT), i.e., the design of computational algorithms to convert acoustic music signals into some form of music notation, is a challenging task in signal processing and artificial intelligence. It comprises several subtasks, including multipitch estimation (MPE), onset and offset detection, instrument recognition, beat and rhythm tracking, interpretation of expressive timing and dynamics, and score typesetting.", "title": "" }, { "docid": "89cc631db97607dbb45c8b956e7dee2a", "text": "Although there is growing interest in measuring integrated information in computational and cognitive systems, current methods for doing so in practice are computationally unfeasible. Existing and novel integration measures are investigated and classified by various desirable properties. A simple taxonomy of Φ-measures is presented where they are each characterized by their choice of factorization method (5 options), choice of probability distributions to compare (3 × 4 options) and choice of measure for comparing probability distributions (7 options). When requiring the Φ-measures to satisfy a minimum of attractive properties, these hundreds of options reduce to a mere handful, some of which turn out to be identical. Useful exact and approximate formulas are derived that can be applied to real-world data from laboratory experiments without posing unreasonable computational demands.", "title": "" }, { "docid": "678b90e0a7fdc1166928ff952b603f29", "text": "Semantic search promises to produce precise answers to user queries by taking advantage of the availability of explicit semantics of information in the context of the semantic web. Existing tools have been primarily designed to enhance the performance of traditional search technologies but with little support for naive users, i.e., ordinary end users who are not necessarily familiar with domain specific semantic data, ontologies, or SQL-like query languages. This paper presents SemSearch, a search engine, which pays special attention to this issue by hiding the complexity of semantic search from end users and making it easy to use and effective. 
In contrast with existing semantic-based keyword search engines which typically compromise their capability of handling complex user queries in order to overcome the problem of knowledge overhead, SemSearch not only overcomes the problem of knowledge overhead but also supports complex queries. Further, SemSearch provides comprehensive means to produce precise answers that on the one hand satisfy user queries and on the other hand are self-explanatory and understandable by end users. A prototype of the search engine has been implemented and applied in the semantic web portal of our lab. An initial evaluation shows promising results.", "title": "" }, { "docid": "c171254eae86ce30c475c4355ed8879f", "text": "The rapid growth of connected things across the globe has been brought about by the deployment of the Internet of things (IoTs) at home, in organizations and industries. The innovation of smart things is envisioned through various protocols, but the most prevalent protocols are pub-sub protocols such as Message Queue Telemetry Transport (MQTT) and Advanced Message Queuing Protocol (AMQP). An emerging paradigm of communication architecture for IoTs support is Fog computing in which events are processed near to the place they occur for efficient and fast response time. One of the major concerns in the adoption of Fog computing based publishsubscribe protocols for the Internet of things is the lack of security mechanisms because the existing security protocols such as SSL/TSL have a large overhead of computations, storage and communications. To address these issues, we propose a secure, Fog computing based publish-subscribe lightweight protocol using Elliptic Curve Cryptography (ECC) for the Internet of Things. We present analytical proofs and results for resource efficient security, comparing to the existing protocols of traditional Internet.", "title": "" }, { "docid": "b783e3a8b9aaec7114603bafffcb5bfd", "text": "Acknowledgements This paper has benefited from conversations and collaborations with colleagues, including most notably Stefan Dercon, Cheryl Doss, and Chris Udry. None of them has read this manuscript, however, and they are not responsible for the views expressed here. Steve Wiggins provided critical comments on the first draft of the document and persuaded me to rethink a number of points. The aim of the Natural Resources Group is to build partnerships, capacity and wise decision-making for fair and sustainable use of natural resources. Our priority in pursuing this purpose is on local control and management of natural resources and other ecosystems. The Institute of Development Studies (IDS) is a leading global Institution for international development research, teaching and learning, and impact and communications, based at the University of Sussex. Its vision is a world in which poverty does not exist, social justice prevails and sustainable economic growth is focused on improving human wellbeing. The Overseas Development Institute (ODI) is a leading independent think tank on international development and humanitarian issues. Its mission is to inspire and inform policy and practice which lead to the reduction of poverty, the alleviation of suffering and the achievement of sustainable livelihoods. Smallholder agriculture has long served as the dominant economic activity for people in sub-Saharan Africa, and it will remain enormously important for the foreseeable future. 
But the size of the sector does not necessarily imply that investments in the smallholder sector will yield high social benefits in comparison to other possible uses of development resources. Large changes could potentially affect the viability of smallholder systems, emanating from shifts in technology, markets, climate and the global environment. The priorities for development policy will vary across and within countries due to the highly heterogeneous nature of the smallholder sector.", "title": "" }, { "docid": "815feed9cce2344872c50da6ffb77093", "text": "Over the last decade blogs became an important part of the Web, where people can announce anything that is on their mind. Due to their high popularity blogs have great potential to mine public opinions regarding products. Such knowledge is very valuable as it could be used to adjust marketing campaigns or advertisement of products accordingly. In this paper we investigate how the blogosphere can be used to predict the success of products in the domain of music and movies. We analyze and characterize the blogging behavior in both domains particularly around product releases, propose different methods for extracting characteristic features from the blogosphere, and show that our predictions correspond to the real world measures Sales Rank and box office revenue respectively.", "title": "" }, { "docid": "5e8e39cb778e86b24d6ceee6419dd333", "text": "The nature of healthcare processes in a multidisciplinary hospital is inherently complex. In this paper, we identify particular problems of modeling healthcare processes with the de-facto standard process modeling language BPMN. We discuss all possibilities of BPMN adressing these problems. Where plain BPMN fails to produce nice and easily comprehensible results, we propose a new approach: Encorporating role information in process models using the color attribute of tasks complementary to the usage of lanes.", "title": "" }, { "docid": "5b393167f2b55df1d9a889d969fb187d", "text": "We propose speaker gender recognition achieved by using score level fusion by AdaBoost. Soft biometrics has been focused on because recognition by fusing biométrie systems and soft biométrie traits may improve the accuracy of recognition and decrease the time for this. Gender recognition is important for speaker recognition and can provide important information to speaker recognition systems. Mel-frequency cepstral coefficient (MFCC) and pitch contain gender information. MFCCs and pitch are often used for gender recognition. Consequently, identification accuracy may be improved by using both MFCC and pitch. We focused on the score level fusion to accomplish speaker gender recognition. We propose speaker gender recognition based on the score level fusion using AdaBoost because it can control the recognition accuracy and recognition time. We experimentally demonstrate the proposed method's effectiveness through simulation results and show that it achieves greater accuracy than that obtained by using single information from voice.", "title": "" }, { "docid": "f226d14c95fca32dc55b554619ec8691", "text": "Motivation to learn is affected by a student’s self-efficacy, goal orientation, locus of control and perceived task difficulty. In the classroom, teachers know how to motivate their students and how to exploit this knowledge to adapt or optimize their instruction when a student shows signs of demotivation. 
In on-line learning environments it is much more difficult to assess the level of motivation of the student and to have adaptive intervention strategies and rules of application to help prevent attrition. We have developed MotSaRT – a motivational strategies recommender tool to support on-line teachers in motivating learners. The design is informed by the Social Cognitive Theory constructs outlined above and a survey on motivation intervention strategies carried out with sixty on-line teachers. The survey results were analysed using a data mining algorithm (J48 decision trees) which resulted in a set of decision rules for recommending motivational strategies. The recommender tool, MotSaRT, has been developed based on these decision rules. Its functionality enables the teacher to specify the learner’s motivation profile. MotSaRT then recommends the most likely intervention strategies to increase motivation. A pilot study is currently being carried out using the MotSaRT tool.", "title": "" }, { "docid": "1ec1fc8aabb8f7880bfa970ccbc45913", "text": "Several isolates of Gram-positive, acidophilic, moderately thermophilic, ferrous-iron- and mineral-sulphide-oxidizing bacteria were examined to establish unequivocally the characteristics of Sulfobacillus-like bacteria. Two species were evident: Sulfobacillus thermosulfidooxidans with 48-50 mol% G+C and Sulfobacillus acidophilus sp. nov. with 55-57 mol% G+C. Both species grew autotrophically and mixotrophically on ferrous iron, on elemental sulphur in the presence of yeast extract, and heterotrophically on yeast extract. Autotrophic growth on sulphur was consistently obtained only with S. acidophilus.", "title": "" }, { "docid": "a1c1b4193d30007f820e86c424a96843", "text": "Water leakage is a significant problem in both developing and developed countries causing water loss in water-distribution systems. Leakage causes economic loss in the form of wastage of water, damage to pipe networks and foundations of roads and buildings, and also poses risk to public health due to water contamination. The lost or unaccounted amount of water is typically 20-30 percent of production. Some older systems may lose even up to 50 percent. The water pipe networks in houses as well as public places are generally concealed and hence detecting water leakage in the initial stages before an upcoming damage is difficult. The existing technologies for detecting leakage have various limitations such as efficiency being dependent on size, material and depth of pipes, need for manual intervention, dependency on weather and surface conditions and effect of water pressure. In this paper, we propose an automated water leakage detection system using a wireless sensor network created along the distribution pipes based on thermal (IR) imaging. Thermal imaging has the capability to work in low lighting or dark conditions and helps in capturing the contrast between hot and cold areas created due to water leakage. A network of low cost, low power thermal imaging sensors each having its own processing and Radiofrequency (RF) Transreceiver units and operating independent of pipe, weather or surface conditions is proposed in this paper. A central database is updated on a real-time basis, enabling very early leakage detection and initiating subsequent action to address the problem. 
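The water-leakage abstract above describes sensor nodes that flag leaks from the thermal contrast between wet and dry areas. A minimal sketch of the per-node decision such a node might make; the frame size, contrast threshold, and reporting stub are illustrative assumptions, not the authors' design.

```python
import numpy as np

def leak_suspected(frame, threshold=3.0):
    # frame: 2-D array of per-pixel temperatures from the IR sensor.
    background = np.median(frame)
    coldest_patch = frame.min()                 # leaking water usually reads colder
    return (background - coldest_patch) > threshold

def report(node_id, frame):
    if leak_suspected(frame):
        print(f"node {node_id}: possible leak, notify central database")

rng = np.random.default_rng(3)
normal_frame = 22.0 + rng.normal(0.0, 0.5, size=(32, 32))
leak_frame = normal_frame.copy()
leak_frame[10:14, 10:14] -= 6.0                 # a cold wet patch
report("pipe-07", normal_frame)                 # prints nothing
report("pipe-07", leak_frame)                   # flags the leak
```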
An example system evaluation is performed and results highlighting the power and cost impact of the sensor network system are presented.", "title": "" }, { "docid": "b0afcee1ac7ce691f60302dd8298b633", "text": "With the increase of online customer opinions in specialised websites and social networks, the necessity of automatic systems to help to organise and classify customer reviews by domain-specific aspect/categories and sentiment polarity is more important than ever. Supervised approaches for Aspect Based Sentiment Analysis obtain good results for the domain/language they are trained on, but having manually labelled data for training supervised systems for all domains and languages is usually very costly and time consuming. In this work we describe W2VLDA, an almost unsupervised system based on topic modelling, that combined with some other unsupervised methods and a minimal configuration, performs aspect/category classification, aspectterms/opinion-words separation and sentiment polarity classification for any given domain and language. We evaluate the performance of the aspect and sentiment classification in the multilingual SemEval 2016 task 5 (ABSA) dataset. We show competitive results for several languages (English, Spanish, French and Dutch) and domains (hotels, restaurants, electronic devices).", "title": "" }, { "docid": "eb59f239621dde59a13854c5e6fa9f54", "text": "This paper presents a novel application of grammatical inference techniques to the synthesis of behavior models of software systems. This synthesis is used for the elicitation of software requirements. This problem is formulated as a deterministic finite-state automaton induction problem from positive and negative scenarios provided by an end-user of the software-to-be. A query-driven state merging algorithm (QSM) is proposed. It extends the RPNI and Blue-Fringe algorithms by allowing membership queries to be submitted to the end-user. State merging operations can be further constrained by some prior domain knowledge formulated as fluents, goals, domain properties, and models of external software components. The incorporation of domain knowledge both reduces the number of queries and guarantees that the induced model is consistent with such knowledge. The proposed techniques are implemented in the ISIS tool and practical evaluations on standard requirements engineering test cases and synthetic data illustrate the interest of this approach. Contact author: Pierre Dupont Department of Computing Science and Engineering (INGI) Université catholique de Louvain Place Sainte Barbe, 2. B-1348 Louvain-la-Neuve Belgium Email: Pierre.Dupont@uclouvain.be Phone: +32 10 47 91 14 Fax: +32 10 45 03 45", "title": "" } ]
scidocsrr
d95a1534e5c7727f4a0e2f0556401de5
A Comparative Review of Dimension Reduction Methods in Approximate Bayesian Computation
[ { "docid": "8f4a0c6252586fa01133f9f9f257ec87", "text": "The pls package implements principal component regression (PCR) and partial least squares regression (PLSR) in R (R Development Core Team 2006b), and is freely available from the Comprehensive R Archive Network (CRAN), licensed under the GNU General Public License (GPL). The user interface is modelled after the traditional formula interface, as exemplified by lm. This was done so that people used to R would not have to learn yet another interface, and also because we believe the formula interface is a good way of working interactively with models. It thus has methods for generic functions like predict, update and coef. It also has more specialised functions like scores, loadings and RMSEP, and a flexible crossvalidation system. Visual inspection and assessment is important in chemometrics, and the pls package has a number of plot functions for plotting scores, loadings, predictions, coefficients and RMSEP estimates. The package implements PCR and several algorithms for PLSR. The design is modular, so that it should be easy to use the underlying algorithms in other functions. It is our hope that the package will serve well both for interactive data analysis and as a building block for other functions or packages using PLSR or PCR. We will here describe the package and how it is used for data analysis, as well as how it can be used as a part of other packages. Also included is a section about formulas and data frames, for people not used to the R modelling idioms.", "title": "" } ]
[ { "docid": "f40966d0a65836faa3755c6413b53eb0", "text": "Pathogenic Escherichia coli is a major cause of diarrhea in postweaning piglets. Virulence genes, antimicrobial resistance, integrons, and genetic diversity of E. coli were determined in 100 rectal swab samples collected from postweaning piglets with and without diarrhea (5-7 weeks of age) in a farm in a central province of Thailand. Of 246 E. coli isolates, 141 were positive for at least one virulence gene determined by multiplex PCR, the most commonly found from both groups of piglets being astA, while lt, F4, F18, and F41 only from diarrheal piglets. More than 80% of E. coli isolates were resistant to 7 of 12 antimicrobial agents. One hundred and fifty-seven E. coli isolates carried class 1 and/or 2 integron(s). Integron-positive isolates are significantly associated with strains resistant to kanamycin, oxytetracycline, streptomycin, sulfamethoxazole/trimethoprim and tetracycline. Phylogenetic analysis by multilocus sequence typing revealed that the 31 representative E. coli isolates were genetically diverse, especially those from diarrheal piglets suggesting that E. coli from postweaning piglets were not derived from a single clone. Sequence type (ST)10, ST641 and ST1114 were most commonly found in both groups of piglets. No correlation was observed among ST, presence of integron and antimicrobial resistance. The study suggests that swines in a farm could be a reservoir and possible spread of diarrheagenic E. coli including strains with antimicrobial resistance genes.", "title": "" }, { "docid": "cb9d35d577afc17afcca66c16ea2f554", "text": "In this paper, we propose a new domain adaptation technique for neural machine translation called cost weighting, which is appropriate for adaptation scenarios in which a small in-domain data set and a large general-domain data set are available. Cost weighting incorporates a domain classifier into the neural machine translation training algorithm, using features derived from the encoder representation in order to distinguish in-domain from out-of-domain data. Classifier probabilities are used to weight sentences according to their domain similarity when updating the parameters of the neural translation model. We compare cost weighting to two traditional domain adaptation techniques developed for statistical machine translation: data selection and sub-corpus weighting. Experiments on two large-data tasks show that both the traditional techniques and our novel proposal lead to significant gains, with cost weighting outperforming the traditional methods.", "title": "" }, { "docid": "5ca765f0ddc5b22ddd88cb41f5c2fde4", "text": "The development of self-adaptive software requires the engineering of an adaptation engine that controls the underlying adaptable software by feedback loops. The engine often describes the adaptation by runtime models representing the adaptable software and by activities such as analysis and planning that use these models. To systematically address the interplay between runtime models and adaptation activities, runtime megamodels have been proposed. A runtime megamodel is a specific model capturing runtime models and adaptation activities. In this article, we go one step further and present an executable modeling language for ExecUtable RuntimE MegAmodels (EUREMA) that eases the development of adaptation engines by following a model-driven engineering approach. 
We provide a domain-specific modeling language and a runtime interpreter for adaptation engines, in particular feedback loops. Megamodels are kept alive at runtime and by interpreting them, they are directly executed to run feedback loops. Additionally, they can be dynamically adjusted to adapt feedback loops. Thus, EUREMA supports development by making feedback loops explicit at a higher level of abstraction and it enables solutions where multiple feedback loops interact or operate on top of each other and self-adaptation co-exists with offline adaptation for evolution.", "title": "" }, { "docid": "70ea3e32d4928e7fd174b417ec8b6d0e", "text": "We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. We also shed light on the properties of deep networks in relation to the geometry of the loss function.", "title": "" }, { "docid": "e4a3a52e297d268288aba404f0d24544", "text": "The world is facing several challenges that must be dealt within the coming years such as efficient energy management, need for economic growth, security and quality of life of its habitants. The increasing concentration of the world population into urban areas puts the cities in the center of the preoccupations and makes them important actors for the world's sustainable development strategy. ICT has a substantial potential to help cities to respond to the growing demands of more efficient, sustainable, and increased quality of life in the cities, thus to make them \"smarter\". Smartness is directly proportional with the \"awareness\". Cyber-physical systems can extract the awareness information from the physical world and process this information in the cyber-world. Thus, a holistic integrated approach, from the physical to the cyber-world is necessary for a successful and sustainable smart city outcome. This paper introduces important research challenges that we believe will be important in the coming years and provides guidelines and recommendations to achieve self-aware smart city objectives.", "title": "" }, { "docid": "9d0a383122a7aa73053cededb64b418d", "text": "With the explosive growth of Internet of Things devices and massive data produced at the edge of the network, the traditional centralized cloud computing model has come to a bottleneck due to the bandwidth limitation and resources constraint. Therefore, edge computing, which enables storing and processing data at the edge of the network, has emerged as a promising technology in recent years. However, the unique features of edge computing, such as content perception, real-time computing, and parallel processing, has also introduced several new challenges in the field of data security and privacy-preserving, which are also the key concerns of the other prevailing computing paradigms, such as cloud computing, mobile cloud computing, and fog computing. 
Despites its importance, there still lacks a survey on the recent research advance of data security and privacy-preserving in the field of edge computing. In this paper, we present a comprehensive analysis of the data security and privacy threats, protection technologies, and countermeasures inherent in edge computing. Specifically, we first make an overview of edge computing, including forming factors, definition, architecture, and several essential applications. Next, a detailed analysis of data security and privacy requirements, challenges, and mechanisms in edge computing are presented. Then, the cryptography-based technologies for solving data security and privacy issues are summarized. The state-of-the-art data security and privacy solutions in edge-related paradigms are also surveyed. Finally, we propose several open research directions of data security in the field of edge computing.", "title": "" }, { "docid": "2ac5b08573e8b243ac0eb5b6ab10c73d", "text": "The use of virtual reality (VR) display systems has escalated over the last 5 yr and may have consequences for those working within vision research. This paper provides a brief review of the literature pertaining to the representation of depth in stereoscopic VR displays. Specific attention is paid to the response of the accommodation system with its cross-links to vergence eye movements, and to the spatial errors that arise when portraying three-dimensional space on a two-dimensional window. It is suggested that these factors prevent large depth intervals of three-dimensional visual space being rendered with integrity through dual two-dimensional arrays.", "title": "" }, { "docid": "b5eafe60989c0c4265fa910c79bbce41", "text": "Little research has addressed IT professionals’ script debugging strategies, or considered whether there may be gender differences in these strategies. What strategies do male and female scripters use and what kinds of mechanisms do they employ to successfully fix bugs? Also, are scripters’ debugging strategies similar to or different from those of spreadsheet debuggers? Without the answers to these questions, tool designers do not have a target to aim at for supporting how male and female scripters want to go about debugging. We conducted a think-aloud study to bridge this gap. Our results include (1) a generalized understanding of debugging strategies used by spreadsheet users and scripters, (2) identification of the multiple mechanisms scripters employed to carry out the strategies, and (3) detailed examples of how these debugging strategies were employed by males and females to successfully fix bugs.", "title": "" }, { "docid": "2b3929da96949056bc473e8da947cebe", "text": "This paper presents “Value-Difference Based Exploration” (VDBE), a method for balancing the exploration/exploitation dilemma inherent to reinforcement learning. The proposed method adapts the exploration parameter of ε-greedy in dependence of the temporal-difference error observed from value-function backups, which is considered as a measure of the agent’s uncertainty about the environment. VDBE is evaluated on a multi-armed bandit task, which allows for insight into the behavior of the method. 
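The abstract above sketches VDBE's idea of adapting the exploration rate from the temporal-difference error observed in value-function backups. A small illustration of one common formulation of that update follows; the exact function and constants are assumptions and may differ from the paper's definition.

```python
import math
import random

def vdbe_epsilon(epsilon, td_error, sigma=1.0, delta=0.1):
    x = math.exp(-abs(td_error) / sigma)
    f = (1.0 - x) / (1.0 + x)        # 0 when there is no surprise, approaches 1 for large errors
    return delta * f + (1.0 - delta) * epsilon

epsilon = 0.5
for step in range(200):
    td_error = random.gauss(0.0, 1.0) * math.exp(-step / 50.0)  # errors shrink as learning settles
    epsilon = vdbe_epsilon(epsilon, td_error)
print("epsilon after learning settles:", round(epsilon, 3))      # drifts towards 0
```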
Preliminary results indicate that VDBE seems to be more parameter robust than commonly used ad hoc approaches such as ε-greedy or softmax.", "title": "" }, { "docid": "3d007291b5ca2220c15e6eee72b94a76", "text": "While the number of knowledge bases in the Semantic Web increases, the maintenance and creation of ontology schemata still remain a challenge. In particular creating class expressions constitutes one of the more demanding aspects of ontology engineering. In this article we describe how to adapt a semi-automatic method for learning OWL class expressions to the ontology engineering use case. Specifically, we describe how to extend an existing learning algorithm for the class learning problem. We perform rigorous performance optimization of the underlying algorithms for providing instant suggestions to the user. We also present two plugins, which use the algorithm, for the popular Protégé and OntoWiki ontology editors and provide a preliminary evaluation on real ontologies.", "title": "" }, { "docid": "babad10231d83130eae6241feb5314cf", "text": "A solution to a binary constraint satisfaction problem is a set of discrete values, one in each of a given set of domains, subject to constraints that allow only prescribed pairs of values in specified pairs of domains. Solutions are sought by backtrack search interleaved with a process that removes from domains those values that are currently inconsistent with provisional choices already made in the course of search. For each value in a given domain, a bit-vector shows which values in another domain are or are not permitted in a solution. Bit-vector representation of constraints allows bit-parallel, therefore fast, operations for editing domains during search. This article revises and updates bit-vector algorithms published in the 1970's, and introduces focus search, which is a new bit-vector algorithm relying more on search and less on domain-editing than previous algorithms. Focus search is competitive within a limited family of constraint satisfaction problems.\n Determination of subgraph isomorphism is a specialized binary constraint satisfaction problem for which bit-vector algorithms have been widely used since the 1980s, particularly for matching molecular structures. This article very substantially updates the author's 1976 subgraph isomorphism algorithm, and reports experimental results with random and real-life data.", "title": "" }, { "docid": "fca617ebfc6dad2db881cdaba9ffe154", "text": "In this paper, we present ‘Tuskbot’, a wheel-based robot with a novel structure called ‘Tusk’, passive and protruded elements in the front part of the robot, which can create an angle-of-attack when it climbs stairs. The robot can easily overcome stairs with the help of Tusk, which does not require additional active mechanisms. We formulated a simplified mathematical model of the structure based on the geometrical relationship between the wheels and the stairs in each phase during the stair-climb. To test the model and Tusk structure, we calculated the length of each link and the angle of Tusk from the dimension of stair and radius of wheels, and built the robot accordingly. 
The results demonstrate the validity of the model and the structure.", "title": "" }, { "docid": "efddb60143c59ee9e459e1048a09787c", "text": "The aim of this paper is to determine the possibilities of using commercial off the shelf FPGA based Software Defined Radio Systems to develop a system capable of detecting and locating small drones.", "title": "" }, { "docid": "d9df98fbd7281b67347df0f2643323fa", "text": "Predefined categories can be assigned to the natural language text using for text classification. It is a “bag-of-word” representation, previous documents have a word with values, it represents how frequently this word appears in the document or not. But large documents may face many problems because they have irrelevant or abundant information is there. This paper explores the effect of other types of values, which express the distribution of a word in the document. These values are called distributional features. All features are calculated by tfidf style equation and these features are combined with machine learning techniques. Term frequency is one of the major factor for distributional features it holds weighted item set. When the need is to minimize a certain score function, discovering rare data correlations is more interesting than mining frequent ones. This paper tackles the issue of discovering rare and weighted item sets, i.e., the infrequent weighted item set mining problem. The classifier which gives the more accurate result is selected for categorization. Experiments show that the distributional features are useful for text categorization.", "title": "" }, { "docid": "baaf5616e7851dde1162fff27ba9475a", "text": "This paper presents the results of a detailed gross and histologic examination of the eyes and brain in a case of synophthalmia as well as radiographic studies of the skull. Data on 34 other cases of synophthalmia-cyclopia on file in the Registry of Ophthalmic Pathology, Armed Forces Institute of Pathology (AFIP), are also summarized. In synophthalmia-cyclopia, the median ocular structure is symmetrical and displays two gradients of ocular organization: (1) The anterior segments are usually paired and comparatively well differentiated, whereas, posteriorly, a single, more disorganized compartment is present; (2) the lateral components show more advanced differentiation than the medial. There is invariably a single optic nerve and no chiasm. The brain, the nose, and the bones and soft tissues of the upper facial region, while malformed, are symmetrical and show a similar gradient of organization in that the lateral parts are better developed than the medial. The constant occurrence of a profound cerebral malformation along with the ocular deformity suggests a widespread abnormality of the anterior neural plate from which both the eyes and brain emerge. The data indicate that the defect occurs at or before the time of closure of the neural folds when the neural plate is still labile. The probability of fusion of two ocular anlagen in synophthalmia-cyclopia seems less likely than the emergence of incomplete bicentricity in the ocular fields of the neural plate during the period when the eye primordia are initially induced by the mesoderm. Embryologic studies in experimental animals provide insight into possible mechanisms by which inperfect eye and brain primordia are established. 
Nonetheless, once established, the eye and brain primordia in synophthalmia-cyclopia are capable of and do complete each step of the usual sequence of ocular and cerebral organogenesis in an orderly manner. The resulting eyes and brain are organogenetically incomplete but histogenetically mature. Ancillary facial and osseous defects result from the faulty migration of neural crests and development of embryonic facial processes secondary to the abnormal ocular and cerebral rudiments. The opinions or assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the Department of the Army or the Department of Defense. Presented in part at the annual meeting of the Association for Research in Vision and Ophthalmology in Sarasota, Florida, April 28, 1975, and at the biennial meeting of the AFIP-Ophthalmic Pathology Alumni Meeting in Washington, D.C., June 18, 1976.", "title": "" }, { "docid": "8b5ea4603ac53a837c3e81dfe953a706", "text": "Many teaching practices implicitly assume that conceptual knowledge can be abstracted from the situations in which it is learned and used. This article argues that this assumption inevitably limits the effectiveness of such practices. Drawing on recent research into cognition as it is manifest in everyday activity, the authors argue that knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used. They discuss how this view of knowledge affects our understanding of learning, and they note that conventional schooling too often ignores the influence of school culture on what is learned in school. As an alternative to conventional practices, they propose cognitive apprenticeship (Collins, Brown, Newman, in press), which honors the situated nature of knowledge. They examine two examples of mathematics instruction that exhibit certain key features of this approach to teaching. The breach between learning and use, which is captured by the folk categories \"know what\" and \"know how,\" may well be a product of the structure and practices of our education system. Many methods of didactic education assume a separation between knowing and doing, treating knowledge as an integral, self-sufficient substance, theoretically independent of the situations in which it is learned and used. The primary concern of schools often seems to be the transfer of this substance, which comprises abstract, decontextualized formal concepts. The activity and context in which learning takes place are thus regarded as merely ancillary to learning---pedagogically useful, of course, but fundamentally distinct and even neutral with respect to what is learned. Recent investigations of learning, however, challenge this separating of what is learned from how it is learned and used. The activity in which knowledge is developed and deployed, it is now argued, is not separable from or ancillary to learning and cognition. Nor is it neutral. Rather, it is an integral part of what is learned. Situations might be said to co-produce knowledge through activity. Learning and cognition, it is now possible to argue, are fundamentally situated. In this paper, we try to explain in a deliberately speculative way, why activity and situations are integral to cognition and learning, and how different ideas of what is appropriate learning activity produce very different results. 
We suggest that, by ignoring the situated nature of cognition, education defeats its own goal of providing useable, robust knowledge. And conversely, we argue that approaches such as cognitive apprenticeship (Collins, Brown, & Newman, in press) that embed learning in activity and make deliberate use of the social and physical context are more in line with the understanding of learning and cognition that is emerging from research. Situated Knowledge and Learning Miller and Gildea's (1987) work on vocabulary teaching has shown how the assumption that knowing and doing can be separated leads to a teaching method that ignores the way situations structure cognition. Their work has described how children are taught words from dictionary definitions and a few exemplary sentences, and they have compared this method with the way vocabulary is normally learned outside school. People generally learn words in the context of ordinary communication. This process is startlingly fast and successful. Miller and Gildea note that by listening, talking, and reading, the average 17-year-old has learned vocabulary at a rate of 5,000 words per year (13 per day) for over 16 years. By contrast, learning words from abstract definitions and sentences taken out of the context of normal use, the way vocabulary has often been taught, is slow and generally unsuccessful. There is barely enough classroom time to teach more than 100 to 200 words per year. Moreover, much of what is taught turns out to be almost useless in practice. They give the following examples of students' uses of vocabulary acquired this way: \"Me and my parents correlate, because without them I wouldn't be here.\" \"I was meticulous about falling off the cliff.\" \"Mrs. Morrow stimulated the soup.\" Given the method, such mistakes seem unavoidable. Teaching from dictionaries assumes that definitions and exemplary sentences are self-contained \"pieces\" of knowledge. But words and sentences are not islands, entire unto themselves. Language use would involve an unremitting confrontation with ambiguity, polysemy, nuance, metaphor, and so forth were these not resolved with the extralinguistic help that the context of an utterance provides (Nunberg, 1978). Prominent among the intricacies of language that depend on extralinguistic help are indexical words --words like I, here, now, next, tomorrow, afterwards, this. Indexical terms are those that \"index\" or more plainly point to a part of the situation in which communication is being conducted. They are not merely context-sensitive; they are completely context-dependent. Words like I or now, for instance, can only be interpreted in the context of their use. Surprisingly, all words can be seen as at least partially indexical (Barwise & Perry, 1983). Experienced readers implicitly understand that words are situated. They, therefore, ask for the rest of the sentence or the context before committing themselves to an interpretation of a word. And they go to dictionaries with situated examples of usage in mind. The situation as well as the dictionary supports the interpretation.
But the students who produced the sentences listed had no support from a normal communicative situation. In tasks like theirs, dictionary definitions are assumed to be self-sufficient. The extralinguistic props that would structure, constrain, and ultimately allow interpretation in normal communication are ignored. Learning from dictionaries, like any method that tries to teach abstract concepts independently of authentic situations, overlooks the way understanding is developed through continued, situated use. This development, which involves complex social negotiations, does not crystallize into a categorical definition. Because it is dependent on situations and negotiations, the meaning of a word cannot, in principle, be captured by a definition, even when the definition is supported by a couple of exemplary sentences. All knowledge is, we believe, like language. Its constituent parts index the world and so are inextricably a product of the activity and situations in which they are produced. A concept, for example, will continually evolve with each new occasion of use, because new situations, negotiations, and activities inevitably recast it in a new, more densely textured form. So a concept, like the meaning of a word, is always under construction. This would also appear to be true of apparently well-defined, abstract technical concepts. Even these are not wholly definable and defy categorical description; part of their meaning is always inherited from the context of use. Learning and tools. To explore the idea that concepts are both situated and progressively developed through activity, we should abandon any notion that they are abstract, self-contained entities. Instead, it may be more useful to consider conceptual knowledge as, in some ways, similar to a set of tools. Tools share several significant features with knowledge: They can only be fully understood through use, and using them entails both changing the user's view of the world and adopting the belief system of the culture in which they are used. First, if knowledge is thought of as tools, we can illustrate Whitehead's (1929) distinction between the mere acquisition of inert concepts and the development of useful, robust knowledge. It is quite possible to acquire a tool but to be unable to use it. Similarly, it is common for students to acquire algorithms, routines, and decontextualized definitions that they cannot use and that, therefore, lie inert. Unfortunately, this problem is not always apparent. Old-fashioned pocket knives, for example, have a device for removing stones from horses' hooves. People with this device may know its use and be able to talk wisely about horses, hooves, and stones. But they may never betray --or even recognize --that they would not begin to know how to use this implement on a horse. Similarly, students can often manipulate algorithms, routines, and definitions they have acquired with apparent competence and yet not reveal, to their teachers or themselves, that they would have no idea what to do if they came upon the domain equivalent of a limping horse. People who use tools actively rather than just acquire them, by contrast, build an increasingly rich implicit understanding of the world in which they use the tools and of the tools themselves. The understanding, both of the world and of the tool, continually changes as a result of their interaction. Learning and acting are interestingly indistinct, learning being a continuous, life-long process resulting from acting in situations. 
Learning how to use a tool involves far more than can be accounted for in any set of explicit rules. The occasions and conditions for use arise directly out of the context of activities of each community that uses the tool, framed by the way members of that community see the world. The community and its viewpoint, quite as much as the tool itself, determine how a tool is used. Thus, carpenters and cabinet makers use chisels differently. Because tools and the way they are used reflect the particular accumulated insights of communities, it is not ", "title": "" }, { "docid": "dbcfb877dae759f9ad1e451998d8df38", "text": "Detection and tracking of humans in video streams is important for many applications. We present an approach to automatically detect and track multiple, possibly partially occluded humans in a walking or standing pose from a single camera, which may be stationary or moving. A human body is represented as an assembly of body parts. Part detectors are learned by boosting a number of weak classifiers which are based on edgelet features. Responses of part detectors are combined to form a joint likelihood model that includes an analysis of possible occlusions. The combined detection responses and the part detection responses provide the observations used for tracking. Trajectory initialization and termination are both automatic and rely on the confidences computed from the detection responses. An object is tracked by data association and meanshift methods. Our system can track humans with both inter-object and scene occlusions with static or non-static backgrounds. Evaluation results on a number of images and videos and comparisons with some previous methods are given.", "title": "" }, { "docid": "9f538d6f447f1e536b7109620156cdf7", "text": "We present a demonstration of Ropossum, an authoring tool for the generation and testing of levels of the physics-based game, Cut the Rope. Ropossum integrates many features: (1) automatic design of complete solvable content, (2) incorporation of designer’s input through the creation of complete or partial designs, (3) automatic check for playability and (4) optimization of a given design based on playability. The system includes a physics engine to simulate the game and an evolutionary framework to evolve content as well as an AI reasoning agent to check for playability. The system is optimised to allow on-line feedback and realtime interaction.", "title": "" }, { "docid": "6ef244a7eb6a5df025e282e1cc5f90aa", "text": "Public infrastructure-as-a-service clouds, such as Amazon EC2 and Microsoft Azure allow arbitrary clients to run virtual machines (VMs) on shared physical infrastructure. This practice of multi-tenancy brings economies of scale, but also introduces the threat of malicious VMs abusing the scheduling of shared resources. Recent works have shown how to mount crossVM side-channel attacks to steal cryptographic secrets. The straightforward solution is hard isolation that dedicates hardware to each VM. However, this comes at the cost of reduced efficiency. We investigate the principle of soft isolation: reduce the risk of sharing through better scheduling. With experimental measurements, we show that a minimum run time (MRT) guarantee for VM virtual CPUs that limits the frequency of preemptions can effectively prevent existing Prime+Probe cache-based side-channel attacks. Through experimental measurements, we find that the performance impact of MRT guarantees can be very low, particularly in multi-core settings. 
Finally, we integrate a simple per-core CPU state cleansing mechanism, a form of hard isolation, into Xen. It provides further protection against side-channel attacks at little cost when used in conjunction with an MRT guarantee.", "title": "" }, { "docid": "bdffbc914108cb74c4130345e568e543", "text": "Early disease detection is a major challenge in the agriculture field. Hence, proper measures have to be taken to fight bioaggressors of crops while minimizing the use of pesticides. The techniques of machine vision are extensively applied to agricultural science, and they have great potential, especially in the plant protection field, which ultimately leads to better crop management. Our goal is early detection of bioaggressors. The paper describes a software prototype system for pest detection on infected images of different leaves. Images of the infected leaf are captured by a digital camera and processed using image growing and image segmentation techniques to detect the infected parts of the particular plants. The detected part is then processed for further feature extraction, which gives a general idea about the pests. This paper proposes automatic detection and calculation of the area of infection on leaves for a whitefly (Trialeurodes vaporariorum Westwood) at a mature stage.", "title": "" } ]
scidocsrr
0c43ae5b26291bb09219d53a9b5130db
A COMPENDIUM OF PATTERN RECOGNITION TECHNIQUES IN FACE, SPEECH AND LIE DETECTION
[ { "docid": "b7597e1f8c8ae4b40f5d7d1fe1f76a38", "text": "In this paper we present a Time-Delay Neural Network (TDNN) approach to phoneme recognition which is characterized by two important properties. 1) Using a 3-layer arrangement of simple computing units, a hierarchy can be constructed that allows for the formation of arbitrary nonlinear decision surfaces. The TDNN learns these decision surfaces automatically using error backpropagation [1]. 2) The time-delay arrangement enables the network to discover acoustic-phonetic features and the temporal relationships between them independent of position in time and hence not blurred by temporal shifts", "title": "" } ]
[ { "docid": "9931caab2f88a29820bd2a15a01b4aad", "text": "In this work, a gas-electric hybrid quad tilt-rotor UAV with morphing wing is designed. The mechanical design, propulsion system design and control architecture are explained. Dynamic model of the aerial vehicle is developed including the effects of tilting rotors, variable fuel weight, and morphing wing lift-drag forces and pitching moments.", "title": "" }, { "docid": "b15dcda2b395d02a2df18f6d8bfa3b19", "text": "We present a method for human pose tracking that learns explicitly about the dynamic effects of human motion on joint appearance. In contrast to previous techniques which employ generic tools such as dense optical flow or spatiotemporal smoothness constraints to pass pose inference cues between frames, our system instead learns to predict joint displacements from the previous frame to the current frame based on the possibly changing appearance of relevant pixels surrounding the corresponding joints in the previous frame. This explicit learning of pose deformations is formulated by incorporating concepts from human pose estimation into an optical flow-like framework. With this approach, state-of-the-art performance is achieved on standard benchmarks for various pose tracking tasks including 3D body pose tracking in RGB video, 3D hand pose tracking in depth sequences, and 3D hand gesture tracking in RGB video.", "title": "" }, { "docid": "934c8f1bbffe43da1482af157754e2b8", "text": "We present the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop, which ran three competitions on the common theme of face analysis from still images. The first one, Looking at People, addressed age estimation, while the second and third competitions, Faces of the World, addressed accessory classification and smile and gender classification, respectively. We present two crowd-sourcing methodologies used to collect manual annotations. A custom-build application was used to collect and label data about the apparent age of people (as opposed to the real age). For the Faces of the World data, the citizen-science Zooniverse platform was used. This paper summarizes the three challenges and the data used, as well as the results achieved by the participants of the competitions. Details of the ChaLearn LAP FotW competitions can be found at http://gesture.chalearn.org.", "title": "" }, { "docid": "363381fbd6a5a19242a432ca80051bba", "text": "Multimedia data on social websites contain rich semantics and are often accompanied with user-defined tags. To enhance Web media semantic concept retrieval, the fusion of tag-based and content-based models can be used, though it is very challenging. In this article, a novel semantic concept retrieval framework that incorporates tag removal and model fusion is proposed to tackle such a challenge. Tags with useful information can facilitate media search, but they are often imprecise, which makes it important to apply noisy tag removal (by deleting uncorrelated tags) to improve the performance of semantic concept retrieval. Therefore, a multiple correspondence analysis (MCA)-based tag removal algorithm is proposed, which utilizes MCA's ability to capture the relationships among nominal features and identify representative and discriminative tags holding strong correlations with the target semantic concepts. 
To further improve the retrieval performance, a novel model fusion method is also proposed to combine ranking scores from both tag-based and content-based models, where the adjustment of ranking scores, the reliability of models, and the correlations between the intervals divided on the ranking scores and the semantic concepts are all considered. Comparative results with extensive experiments on the NUS-WIDE-LITE as well as the NUS-WIDE-270K benchmark datasets with 81 semantic concepts show that the proposed framework outperforms baseline results and the other comparison methods with each component being evaluated separately.", "title": "" }, { "docid": "5207f7a986dd1fecbe4afd0789d0628a", "text": "Characterization of driving maneuvers or driving styles through motion sensors has become a field of great interest. Before now, this characterization used to be carried out with signals coming from extra equipment installed inside the vehicle, such as On-Board Diagnostic (OBD) devices or sensors in pedals. Nowadays, with the evolution and scope of smartphones, these have become the devices for recording mobile signals in many driving characterization applications. Normally multiple available sensors are used, such as accelerometers, gyroscopes, magnetometers or the Global Positioning System (GPS). However, using sensors such as GPS increase significantly battery consumption and, additionally, many current phones do not include gyroscopes. Therefore, we propose the characterization of driving style through only the use of smartphone accelerometers. We propose a deep neural network (DNN) architecture that combines convolutional and recurrent networks to estimate the vehicle movement direction (VMD), which is the forward movement directional vector captured in a phone's coordinates. Once VMD is obtained, multiple applications such as characterizing driving styles or detecting dangerous events can be developed. In the development of the proposed DNN architecture, two different methods are compared. The first one is based on the detection and classification of significant acceleration driving forces, while the second one relies on longitudinal and transversal signals derived from the raw accelerometers. The final success rate of VMD estimation for the best method is of 90.07%.", "title": "" }, { "docid": "ec03f26e8a4708c8e9f839b3006d0231", "text": "We propose an automatic diabetic retinopathy (DR) analysis algorithm based on two-stages deep convolutional neural networks (DCNN). Compared to existing DCNN-based DR detection methods, the proposed algorithm have the following advantages: (1) Our method can point out the location and type of lesions in the fundus images, as well as giving the severity grades of DR. Moreover, since retina lesions and DR severity appear with different scales in fundus images, the integration of both local and global networks learn more complete and specific features for DR analysis. (2) By introducing imbalanced weighting map, more attentions will be given to lesion patches for DR grading, which significantly improve the performance of the proposed algorithm. In this study, we label 12, 206 lesion patches and re-annotate the DR grades of 23, 595 fundus images from Kaggle competition dataset. 
Under the guidance of clinical ophthalmologists, the experimental results show that our local lesion detection net achieves comparable performance to trained human observers, and the proposed imbalanced weighting scheme is also shown to significantly improve the capability of our DCNN-based DR grading algorithm.", "title": "" }, { "docid": "f84f7ad81967a6704490243b2b1fbbe4", "text": "A fundamental question in frontal lobe function is how motivational and emotional parameters of behavior apply to executive processes. Recent advances in mood and personality research and the technology and methodology of brain research provide opportunities to address this question empirically. Using event-related potentials to track error monitoring in real time, the authors demonstrated that variability in the amplitude of the error-related negativity (ERN) is dependent on mood and personality variables. College students who are high on negative affect (NA) and negative emotionality (NEM) displayed larger ERN amplitudes early in the experiment than participants who are low on these dimensions. As the high-NA and -NEM participants disengaged from the task, the amplitude of the ERN decreased. These results reveal that affective distress and associated behavioral patterns are closely related with frontal lobe executive functions.", "title": "" }, { "docid": "cfaeeb000232ade838ad751b7b404a66", "text": "Meyer has recently introduced an image decomposition model to split an image into two components: a geometrical component and a texture (oscillatory) component. Inspired by his work, numerical models have been developed to carry out the decomposition of gray scale images. In this paper, we propose a decomposition algorithm for color images. We introduce a generalization of Meyer's G norm to RGB vectorial color images, and use the Chromaticity and Brightness color model with total variation minimization. We illustrate our approach with numerical examples. 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "28600f0ee7ca1128874e830e01a028de", "text": "This paper presents and analyzes a three-tier architecture for collecting sensor data in sparse sensor networks. Our approach exploits the presence of mobile entities (called MULEs) present in the environment. When in close range, MULEs pick up data from the sensors, buffer it, and deliver it to wired access points. This can lead to substantial power savings at the sensors as they only have to transmit over a short range. This paper focuses on a simple analytical model for understanding performance as system parameters are scaled. Our model assumes a two-dimensional random walk for mobility and incorporates key system variables such as number of MULEs, sensors and access points. The performance metrics observed are the data success rate (the fraction of generated data that reaches the access points), latency and the required buffer capacities on the sensors and the MULEs. The modeling and simulation results can be used for further analysis and provide certain guidelines for deployment of such systems. 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "61d31ebda0f9c330e5d86639e0bd824e", "text": "An electric vehicle (EV) aggregation agent, as a commercial middleman between the electricity market and EV owners, participates with bids for purchasing electrical energy and selling secondary reserve.
This paper presents an optimization approach to support the aggregation agent participating in the day-ahead and secondary reserve sessions, and identifies the input variables that need to be forecasted or estimated. Results are presented for two years (2009 and 2010) of the Iberian market, and considering perfect and naïve forecast for all variables of the problem.", "title": "" }, { "docid": "fef5bf498eb0da7a62a2bc1433e9bd5f", "text": "The “CRC Handbook” is well-known to anyone who has taken a college chemistry course, and CRC Press has traded on this name-familiarity to greatly expand its “Handbook” series. One of the newest entries to join titles such as the Handbook of Combinatorial Designs, the Handbook of Exact Solutions to Ordinary Differential Equations and the Handbook of Edible Weeds, is the Handbook of Graph Theory. Its editors will be familiar to many as the authors of the textbook, Graph Theory and Its Applications, which is also published by CRC Press. The handbooks about mathematics typically strive for comprehensiveness in a concise style, with sections contributed by specialists within subdisciplines. This volume runs to 1167 pages with 60 contributors providing 54 sections, organized into 11 chapters. As an indication of the topics covered, the chapter titles are Introduction to Graphs; Graph Representation; Directed Graphs; Connectivity and Traversability; Colorings and Related Topics; Algebraic Graph Theory; Topological Graph Theory; Analytic Graph Theory; Graphical Measurement; Graphs in Computer Science; Networks and Flows. Each section is organized into subsections that begin with the basic definitions and ideas, provide a few key examples and conclude with a list of facts (theorems) and remarks. Each of these items is referenced with a label (e.g. 7.7.3.F29 is the 29th Fact of Section 7.7, and can be found in Subsection 7.7.3). This makes for easy crossreferencing within the volume, and provides an easy reference system for the reader’s own use. Sections conclude with references to monographs and important research articles. And on occasion there are conjectures or open problems listed too. The author of every section has provided a glossary, which the editors have coalesced into separate glossaries for each of the eleven chapters. The editors have also strived for uniform terminology and notation throughout, and where this is impossible, the distinctions, subtleties or conflicts between subdisciplines have been carefully highlighted. These types of handbooks shine when one cannot remember that the Ramsey number R(5, 14) is only known to be bounded between 221 and 1280, or one cannot recall (or never knew) what an irredundance number is. For these sorts of questions, the believable claim of 90% content coverage should guarantee frequent success when it is consulted. The listed facts never include any proofs, and many do not include any reference to the literature. Presumably some of them are trivialities, but they could all use some pointer to where one can find a proof. The editors are proud of how long the bibliographies are, but sometimes they are too short. In most every case, there could be more guidance about which elements of the bibliography are the most useful for further general investigations into a topic. An advanced graduate student or researcher of graph theory will find a book of this sort invaluable. Within their specialty the coverage might be considered skimpy. 
However, for those occasions when ideas or results from an allied specialty are of interest, or if one is simply curious about exactly what some topic involves, or what is known about it, then consulting this volume will answer many simple questions quickly. Similarly, someone in a related discipline, such as cryptography or computer science, whose work requires some knowledge of the state-of-the-art in graph theory, will also find this a good volume to consult for quick, easily located, answers. Given that it summarizes a field where over 1,000 papers are published each year, it is a must-have for the well-equipped mathematics research library.", "title": "" }, { "docid": "0389a49d23b72bf29c0a186de9566939", "text": "IEEE 1451 has been around for almost 20 years and in that time it has seen many changes in the world of smart sensors. One of the most distinct paradigms to arise was the Internet-of-Things and with it, the popularity of lightweight and simple-to-implement communication protocols. One of these protocols in particular, MQ Telemetry Transport, has become synonymous with large cloud service providers such as Amazon Web Services, IBM Watson, and Microsoft Azure, along with countless other services. While MQTT had traditionally been used in controlled networks within server centers, the simplicity of the protocol has caused it to be utilized on the open internet. Now being called the language of the IoT, it seems obvious that any standard that is aiming to bring a common network service layer to the IoT architecture should be able to utilize MQTT. This paper proposes potential methodologies to extend the Common Architectures and Network services found in the IEEE 1451 Family of Standards into applications which utilize MQTT.", "title": "" }, { "docid": "b720df1467aade5dd1ba82602ba14591", "text": "Modern medical devices and equipment have become very complex and sophisticated and are expected to operate under stringent environments. Hospitals must ensure that their critical medical devices are safe, accurate, reliable and operating at the required level of performance. Despite their importance, the application of inspection, maintenance and optimization models to medical devices is fairly new. In Canada, most, if not all, healthcare organizations include all their medical equipment in their maintenance program and simply follow manufacturers' recommendations for preventative maintenance. Moreover, current maintenance strategies employed in hospitals and healthcare organizations have difficulty in identifying specific risks and applying optimal risk reduction activities. This paper addresses these gaps found in the literature for medical equipment inspection and maintenance and reviews various important aspects including current policies applied in hospitals. Finally, we suggest future research which will be the starting point to develop tools and policies for better medical device management in the future.", "title": "" }, { "docid": "cd6e9587aa41f95768d6c146df82c50f", "text": "This paper deals with genetic algorithm implementation in Python. A genetic algorithm is a probabilistic search algorithm based on the mechanics of natural selection and natural genetics. In genetic algorithms, a solution is represented by a list or a string. List or string processing in Python is more productive than in C/C++/Java. Implementing genetic algorithms in Python is quick and easy. In this paper, we introduce genetic algorithm implementation methods in Python.
And we discuss various tools for speeding up Python programs.", "title": "" }, { "docid": "ebcff53d86162e30c43b58ae03e786a0", "text": "The adjustment of probabilistic models for sentiment analysis to changes in language use and the perception of products can be realized via incremental learning techniques. We provide a free, open and GUI-based sentiment analysis tool that allows for a) relabeling predictions and/or adding labeled instances to retrain the weights of a given model, and b) customizing lexical resources to account for false positives and false negatives in sentiment lexicons. Our results show that incrementally updating a model with information from new and labeled instances can substantially increase accuracy. The provided solution can be particularly helpful for gradually refining or enhancing models in an easily accessible fashion while avoiding a) the costs for training a new model from scratch and b) the deterioration of prediction accuracy over time.", "title": "" }, { "docid": "eae5470d2b5cfa6a595ee335a25c7b68", "text": "For uplink large-scale MIMO systems, linear minimum mean square error (MMSE) signal detection algorithm is near-optimal but involves matrix inversion with high complexity. In this paper, we propose a low-complexity signal detection algorithm based on the successive overrelaxation (SOR) method to avoid the complicated matrix inversion. We first prove a special property that the MMSE filtering matrix is symmetric positive definite for uplink large-scale MIMO systems, which is the premise for the SOR method. Then a low-complexity iterative signal detection algorithm based on the SOR method as well as the convergence proof is proposed. The analysis shows that the proposed scheme can reduce the computational complexity from O(K3) to O(K2), where K is the number of users. Finally, we verify through simulation results that the proposed algorithm outperforms the recently proposed Neumann series approximation algorithm, and achieves the near-optimal performance of the classical MMSE algorithm with a small number of iterations.", "title": "" }, { "docid": "342b72bf32937104ae80ae275c8c9585", "text": "In this paper, we introduce a Radio Frequency IDentification (RFID) based smart shopping system, KONARK, which helps users to checkout items faster and to track purchases in real-time. In parallel, our solution also provides the shopping mall owner with information about user interest on particular items. The central component of KONARK system is a customized shopping cart having a RFID reader which reads RFID tagged items. To provide check-out facility, our system detects in-cart items with almost 100% accuracy within 60s delay by exploiting the fact that the physical level information (RSSI, phase, doppler, read rate etc.) of in-cart RFID tags are different than outside tags. KONARK also detects user interest with 100% accuracy by exploiting the change in physical level parameters of RFID tag on the object user interacted with. In general, KONARK has been shown to perform with reasonably high accuracy in different mobility speeds in a mock-up of a shopping mall isle.", "title": "" }, { "docid": "362ce6581dee5023c9d548b634153345", "text": "In most practical situations, the compression or transmission of images and videos creates distortions that will eventually be perceived by a human observer. Vice versa, image and video restoration techniques, such as inpainting or denoising, aim to enhance the quality of experience of human viewers. 
Correctly assessing the similarity between an image and an undistorted reference image as subjectively experienced by a human viewer can thus lead to significant improvements in any transmission, compression, or restoration system. This paper introduces the Haar wavelet-based perceptual similarity index (HaarPSI), a novel and computationally inexpensive similarity measure for full reference image quality assessment. The HaarPSI utilizes the coefficients obtained from a Haar wavelet decomposition to assess local similarities between two images, as well as the relative importance of image areas. The consistency of the HaarPSI with the human quality of experience was validated on four large benchmark databases containing thousands of differently distorted images. On these databases, the HaarPSI achieves higher correlations with human opinion scores than state-of-the-art full reference similarity measures like the structural similarity index (SSIM), the feature similarity index (FSIM), and the visual saliency-based index (VSI). Along with the simple computational structure and the short execution time, these experimental results suggest a high applicability of the HaarPSI in real world tasks.", "title": "" }, { "docid": "23737f898d9b50ff7741096a59054759", "text": "We present a new method for speech denoising and robust speech recognition. Using the framework of probabilistic models allows us to integrate detailed speech models and models of realistic non-stationary noise signals in a principled manner. The framework transforms the denoising problem into a problem of Bayes-optimal signal estimation, producing minimum mean square error estimators of desired features of clean speech from noisy data. We describe a fast and efficient implementation of an algorithm that computes these estimators. The effectiveness of this algorithm is demonstrated in robust speech recognition experiments, using the Wall Street Journal speech corpus and Microsoft Whisper large-vocabulary continuous speech recognizer. Results show significantly lower word error rates than those under noisy-matched condition. In particular, when the denoising algorithm is applied to the noisy training data and subsequently the recognizer is retrained, very low error rates are obtained.", "title": "" }, { "docid": "f435f4db05c4dc387239709f3b6f414b", "text": "The present paper argues for the notion that when attention is spread across the visual field in the first sweep of information through the brain visual selection is completely stimulus-driven. Only later in time, through recurrent feedback processing, volitional control based on expectancy and goal set will bias visual selection in a top-down manner. Here we review behavioral evidence as well as evidence from ERP, fMRI, TMS and single cell recording consistent with stimulus-driven selection. Alternative viewpoints that assume a large role for top-down processing are discussed. It is argued that in most cases evidence supporting top-down control on visual selection in fact demonstrates top-down control on processes occurring later in time, following initial selection. We conclude that top-down knowledge regarding non-spatial features of the objects cannot alter the initial selection priority. Only by adjusting the size of the attentional window, the initial sweep of information through the brain may be altered in a top-down way.", "title": "" } ]
scidocsrr
6cb1e299d4a4996200fa5c7a0cead19c
Low-Cost Inkjet-Printed Fully Passive RFID Tags for Calibration-Free Capacitive/Haptic Sensor Applications
[ { "docid": "b9a2a41e12e259fbb646ff92956e148e", "text": "The paper presents a concept where pairs of ordinary RFID tags are exploited for use as remotely read moisture sensors. The pair of tags is incorporated into one label where one of the tags is embedded in a moisture absorbent material and the other is left open. In a humid environment the moisture concentration is higher in the absorbent material than the surrounding environment which causes degradation to the embedded tag's antenna in terms of dielectric losses and change of input impedance. The level of relative humidity or the amount of water in the absorbent material is determined for a passive RFID system by comparing the difference in RFID reader output power required to power up respectively the open and embedded tag. It is similarly shown how the backscattered signal strength of a semi-active RFID system is proportional to the relative humidity and amount of water in the absorbent material. Typical applications include moisture detection in buildings, especially from leaking water pipe connections hidden beyond walls. Presented solution has a cost comparable to ordinary RFID tags, and the passive system also has infinite life time since no internal power supply is needed. The concept is characterized for two commercial RFID systems, one passive operating at 868 MHz and one semi-active operating at 2.45 GHz.", "title": "" } ]
[ { "docid": "15b38be44110ded3407b152af2f65457", "text": "What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue.", "title": "" }, { "docid": "0186399810b1ed117a23bf50d544d1c6", "text": "Mobile cloud computing (MCC) is a relatively new concept that leverages the combination of cloud technology, mobile computing, and wireless networking to enrich the usability experiences of mobile users. Many field of application such as mobile health, mobile learning, mobile commerce and mobile entertainment are now taking advantage of MCC technologies. Since MCC is new, there is need to advance research in MCC in order to deepen practice. Currently, what exist are mostly descriptive literature reviews in the area of MCC. In this paper, a systematic literature review (SLR), which offers a structured, methodical, and rigorous approach to the understanding of the trend of research in MCC, and the least and most researched issue is presented. The objective of the study is to provide a credible intellectual guide for upcoming researchers in MCC to help them identify areas in MCC research where they can make the most impact. The SLR was limited to peer-reviewed conference papers and journal articles published from 2002 to 2014. The study reveals that privacy, security and trust in MCC are the least researched, whereas issues of architecture, context awareness and data management have been averagely researched, while issues on operations, end users, service and applications have received a lot of attention in the literature.", "title": "" }, { "docid": "c4062390a6598f4e9407d29e52c1a3ed", "text": "We have conducted a comprehensive search for conserved elements in vertebrate genomes, using genome-wide multiple alignments of five vertebrate species (human, mouse, rat, chicken, and Fugu rubripes). Parallel searches have been performed with multiple alignments of four insect species (three species of Drosophila and Anopheles gambiae), two species of Caenorhabditis, and seven species of Saccharomyces. Conserved elements were identified with a computer program called phastCons, which is based on a two-state phylogenetic hidden Markov model (phylo-HMM). PhastCons works by fitting a phylo-HMM to the data by maximum likelihood, subject to constraints designed to calibrate the model across species groups, and then predicting conserved elements based on this model. 
The predicted elements cover roughly 3%-8% of the human genome (depending on the details of the calibration procedure) and substantially higher fractions of the more compact Drosophila melanogaster (37%-53%), Caenorhabditis elegans (18%-37%), and Saccharaomyces cerevisiae (47%-68%) genomes. From yeasts to vertebrates, in order of increasing genome size and general biological complexity, increasing fractions of conserved bases are found to lie outside of the exons of known protein-coding genes. In all groups, the most highly conserved elements (HCEs), by log-odds score, are hundreds or thousands of bases long. These elements share certain properties with ultraconserved elements, but they tend to be longer and less perfectly conserved, and they overlap genes of somewhat different functional categories. In vertebrates, HCEs are associated with the 3' UTRs of regulatory genes, stable gene deserts, and megabase-sized regions rich in moderately conserved noncoding sequences. Noncoding HCEs also show strong statistical evidence of an enrichment for RNA secondary structure.", "title": "" }, { "docid": "ac777c89315bfce3034fbb1cd2f3ba52", "text": "Studies of qualitative assessment of organizational processes (e.g., safety audits and performance indicators) and their incorporation into risk models have been based on a ‘normative view’ that decomposes organizations into separate processes that are likely to fail and lead to accidents. This paper discusses a control theoretic framework of organizational safety that views accidents as a result of performance variability of human behaviors and organizational processes whose complex interactions and coincidences lead to adverse events. Safety-related tasks managed by organizational processes are examined from the perspective of complexity and coupling. This allows safety analysts to look deeper into the complex interactions of organizational processes and how these may remain hidden or migrate toward unsafe boundaries. A taxonomy of variability of organizational processes is proposed and challenges in managing adaptability are discussed. The proposed framework can be used for studying interactions between organizational processes, changes of priorities over time, delays in effects, reinforcing influences, and long-term changes of processes. These dynamic organizational interactions are visualized with the use of system dynamics. The framework can provide a new basis for modeling organizational factors in risk analysis, analyzing accidents and designing safety reporting systems.", "title": "" }, { "docid": "4e28055d48d6c00aebb7ddb6a287636d", "text": "BACKGROUND\nIt is commonly assumed that motion sickness caused by moving visual scenes arises from the illusion of self-motion (i.e., vection).\n\n\nHYPOTHESES\nBoth studies reported here investigated whether sickness and vection were correlated. The first study compared sickness and vection created by real and virtual visual displays. The second study investigated whether visual fixation to suppress eye movements affected motion sickness or vection.\n\n\nMETHOD\nIn the first experiment subjects viewed an optokinetic drum and a virtual simulation of the optokinetic drum. 
The second experiment investigated two conditions on a virtual display: a) moving black and white stripes; and b) moving black and white stripes with a stationary cross on which subjects fixated to reduce eye movements.\n\n\nRESULTS\nIn the first study, ratings of motion sickness were correlated between the conditions (real and the virtual drum), as were ratings of vection. With both conditions, subjects with poor visual acuity experienced greater sickness. There was no correlation between ratings of vection and ratings of sickness in either condition. In the second study, fixation reduced motion sickness but had no affect on vection. Motion sickness was correlated with visual acuity without fixation, but not with fixation. Again, there was no correlation between vection and motion sickness.\n\n\nCONCLUSIONS\nVection is not the primary cause of sickness with optokinetic stimuli. Vection appears to be influenced by peripheral vision whereas motion sickness is influenced by central vision. When the eyes are free to track moving stimuli, there is an association between visual acuity and motion sickness. Virtual displays can create vection and may be used to investigate visually induced motion sickness.", "title": "" }, { "docid": "69c223a3732005111abecd116e0ea390", "text": "The present study examines age-related changes in skeletal muscle size and function after 12 yr. Twelve healthy sedentary men were studied in 1985-86 (T1) and nine (initial mean age 65.4 +/- 4.2 yr) were reevaluated in 1997-98 (T2). Isokinetic muscle strength of the knee and elbow extensors and flexors showed losses (P < 0.05) ranging from 20 to 30% at slow and fast angular velocities. Computerized tomography (n = 7) showed reductions (P < 0.05) in the cross-sectional area (CSA) of the thigh (12.5%), all thigh muscles (14.7%), quadriceps femoris muscle (16.1%), and flexor muscles (14. 9%). Analysis of covariance showed that strength at T1 and changes in CSA were independent predictors of strength at T2. Muscle biopsies taken from vastus lateralis muscles (n = 6) showed a reduction in percentage of type I fibers (T1 = 60% vs. T2 = 42%) with no change in mean area in either fiber type. The capillary-to-fiber ratio was significantly lower at T2 (1.39 vs. 1. 08; P = 0.043). Our observations suggest that a quantitative loss in muscle CSA is a major contributor to the decrease in muscle strength seen with advancing age and, together with muscle strength at T1, accounts for 90% of the variability in strength at T2.", "title": "" }, { "docid": "19cb14825c6654101af1101089b66e16", "text": "Critical infrastructures, such as power grids and transportation systems, are increasingly using open networks for operation. The use of open networks poses many challenges for control systems. The classical design of control systems takes into account modeling uncertainties as well as physical disturbances, providing a multitude of control design methods such as robust control, adaptive control, and stochastic control. With the growing level of integration of control systems with new information technologies, modern control systems face uncertainties not only from the physical world but also from the cybercomponents of the system. The vulnerabilities of the software deployed in the new control system infrastructure will expose the control system to many potential risks and threats from attackers. Exploitation of these vulnerabilities can lead to severe damage as has been reported in various news outlets [1], [2]. 
More recently, it has been reported in [3] and [4] that a computer worm, Stuxnet, was spread to target Siemens supervisory control and data acquisition (SCADA) systems that are configured to control and monitor specific industrial processes.", "title": "" }, { "docid": "5326c50e75dfd32c6e25b57bf96e1ee1", "text": "We present PPF-FoldNet for unsupervised learning of 3D local descriptors on pure point cloud geometry. Based on the folding-based auto-encoding of well known point pair features, PPF-FoldNet offers many desirable properties: it necessitates neither supervision, nor a sensitive local reference frame, benefits from point-set sparsity, is end-to-end, fast, and can extract powerful rotation invariant descriptors. Thanks to a novel feature visualization, its evolution can be monitored to provide interpretable insights. Our extensive experiments demonstrate that despite having six degree-of-freedom invariance and lack of training labels, our network achieves state of the art results in standard benchmark datasets and outperforms its competitors when rotations and varying point densities are present. PPF-FoldNet achieves 9% higher recall on standard benchmarks, 23% higher recall when rotations are introduced into the same datasets and finally, a margin of > 35% is attained when point density is significantly decreased.", "title": "" }, { "docid": "f629f426943b995a304f3d35b7090cda", "text": "We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads text as bytes and outputs span annotations of the form [start, length, label] where start positions, lengths, and labels are separate entries in our vocabulary. Because we operate directly on unicode bytes rather than languagespecific words or characters, we can analyze text in many languages with a single model. Due to the small vocabulary size, these multilingual models are very compact, but produce results similar to or better than the state-ofthe-art in Part-of-Speech tagging and Named Entity Recognition that use only the provided training datasets (no external data sources). Our models are learning “from scratch” in that they do not rely on any elements of the standard pipeline in Natural Language Processing (including tokenization), and thus can run in standalone fashion on raw text.", "title": "" }, { "docid": "253fb54d00d50a407452fff881390ba1", "text": "In this work, we investigate the effects of the cascade architecture of dilated convolutions and the deep network architecture of multi-resolution input images on the accuracy of semantic segmentation. We show that a cascade of dilated convolutions is not only able to efficiently capture larger context without increasing computational costs, but can also improve the localization performance. In addition, the deep network architecture for multi-resolution input images increases the accuracy of semantic segmentation by aggregating multi-scale contextual information. Furthermore, our fully convolutional neural network is coupled with a model of fully connected conditional random fields to further remove isolated false positives and improve the prediction along object boundaries. We present several experiments on two challenging image segmentation datasets, showing substantial improvements over strong baselines.", "title": "" }, { "docid": "1436e4fddc73d33a6cf83abfa5c9eb02", "text": "The aim of our study was to provide a contribution to the research field of the critical success factors (CSFs) of ERP projects, with specific focus on smaller enterprises (SMEs). 
Therefore, we conducted a systematic literature review in order to update the existing reviews of CSFs. On the basis of that review, we led several interviews with ERP consultants experienced with ERP implementations in SMEs. As a result, we showed that all factors found in the literature also affected the success of ERP projects in SMEs. However, within those projects, technological factors gained much more importance compared to the factors that most influence the success of larger ERP projects. For SMEs, factors like the Organizational fit of the ERP system as well as ERP system tests were even more important than Top management support or Project management, which were the most important factors for large-scale companies.", "title": "" }, { "docid": "27655e7db2ffe0b298278219f484ac4f", "text": "This paper proposes a novel compact balanced-to-unbalanced bandpass filter. Firstly, a pre-design circuit is presented, which is composed of an inductive coupled-line bandpass filter and an out-of-phase capacitive coupled-line bandpass filter. A novel compact circuit with three coupled lines configuration, derived from the pre-design circuit, is then proposed for miniaturizing the balanced-to-unbalanced bandpass filter. A 2.4-GHz multilayer ceramic chip type balanced-to-unbalanced bandpass filter with a size of 2.0 mm times 1.2 mm times 0.7 mm is developed to validate the feasibility of the proposed structure. The filter is designed by using circuit simulation, as well as full-wave electromagnetic simulation softwares, and fabricated by the use of low-temperature co-fired ceramic technology. The measured results agree quite well with the simulated. According to the measurement results, the maximum insertion loss is 1.65 dB, the maximum in-band phase imbalance is within 3deg, and the maximum in-band magnitude imbalance is less than 0.32 dB.", "title": "" }, { "docid": "369af16d8d6bcaaa22b1ef727768e5e3", "text": "We catalogue available software solutions for non-rigid image registration to support scientists in selecting suitable tools for specific medical registration purposes. Registration tools were identified using non-systematic search in Pubmed, Web of Science, IEEE Xplore® Digital Library, Google Scholar, and through references in identified sources (n = 22). Exclusions are due to unavailability or inappropriateness. The remaining (n = 18) tools were classified by (i) access and technology, (ii) interfaces and application, (iii) living community, (iv) supported file formats, and (v) types of registration methodologies emphasizing the similarity measures implemented. Out of the 18 tools, (i) 12 are open source, 8 are released under a permissive free license, which imposes the least restrictions on the use and further development of the tool, 8 provide graphical processing unit (GPU) support; (ii) 7 are built on software platforms, 5 were developed for brain image registration; (iii) 6 are under active development but only 3 have had their last update in 2015 or 2016; (iv) 16 support the Analyze format, while 7 file formats can be read with only one of the tools; and (v) 6 provide multiple registration methods and 6 provide landmark-based registration methods. Based on open source, licensing, GPU support, active community, several file formats, algorithms, and similarity measures, the tools Elastics and Plastimatch are chosen for the platform ITK and without platform requirements, respectively. 
Researchers in medical image analysis already have a large choice of registration tools freely available. However, the most recently published algorithms may not be included in the tools, yet.", "title": "" }, { "docid": "ad78f226f21bd020e625659ad3ddbf74", "text": "We study the approach to jamming in hard-sphere packings and, in particular, the pair correlation function g(2) (r) around contact, both theoretically and computationally. Our computational data unambiguously separate the narrowing delta -function contribution to g(2) due to emerging interparticle contacts from the background contribution due to near contacts. The data also show with unprecedented accuracy that disordered hard-sphere packings are strictly isostatic: i.e., the number of exact contacts in the jamming limit is exactly equal to the number of degrees of freedom, once rattlers are removed. For such isostatic packings, we derive a theoretical connection between the probability distribution of interparticle forces P(f) (f) , which we measure computationally, and the contact contribution to g(2) . We verify this relation for computationally generated isostatic packings that are representative of the maximally random jammed state. We clearly observe a maximum in P(f) and a nonzero probability of zero force, shedding light on long-standing questions in the granular-media literature. We computationally observe an unusual power-law divergence in the near-contact contribution to g(2) , persistent even in the jamming limit, with exponent -0.4 clearly distinguishable from previously proposed inverse-square-root divergence. Additionally, we present high-quality numerical data on the two discontinuities in the split-second peak of g(2) and use a shared-neighbor analysis of the graph representing the contact network to study the local particle clusters responsible for the peculiar features. Finally, we present the computational data on the contact contribution to g(2) for vacancy-diluted fcc crystal packings and also investigate partially crystallized packings along the transition from maximally disordered to fully ordered packings. We find that the contact network remains isostatic even when ordering is present. Unlike previous studies, we find that ordering has a significant impact on the shape of P(f) for small forces.", "title": "" }, { "docid": "edf744b475ec90a123685b4f178506c0", "text": "Web servers are ubiquitous, remotely accessible, and often misconfigured. In addition, custom web-based applications may introduce vulnerabilities that are overlooked even by the most security-conscious server administrators. Consequently, web servers are a popular target for hackers. To mitigate the security exposure associated with web servers, intrusion detection systems are deployed to analyze and screen incoming requests. The goal is to perform early detection of malicious activity and possibly prevent more serious damage to the protected site. Even though intrusion detection is critical for the security of web servers, the intrusion detection systems available today only perform very simple analyses and are often vulnerable to simple evasion techniques. In addition, most systems do not provide sophisticated attack languages that allow a system administrator to specify custom, complex attack scenarios to be detected. This paper presents WebSTAT, an intrusion detection system that analyzes web requests looking for evidence of malicious behavior. The system is novel in several ways. 
First of all, it provides a sophisticated language to describe multistep attacks in terms of states and transitions. In addition, the modular nature of the system supports the integrated analysis of network traffic sent to the server host, operating system-level audit data produced by the server host, and the access logs produced by the web server. By correlating different streams of events, it is possible to achieve more effective detection of web-based attacks.", "title": "" }, { "docid": "1a45d5e0ccc4816c0c64c7e25e7be4e3", "text": "The interpolation of correspondences (EpicFlow) was widely used for optical flow estimation in most-recent works. It has the advantage of edge-preserving and efficiency. However, it is vulnerable to input matching noise, which is inevitable in modern matching techniques. In this paper, we present a Robust Interpolation method of Correspondences (called RicFlow) to overcome the weakness. First, the scene is over-segmented into superpixels to revitalize an early idea of piecewise flow model. Then, each model is estimated robustly from its support neighbors based on a graph constructed on superpixels. We propose a propagation mechanism among the pieces in the estimation of models. The propagation of models is significantly more efficient than the independent estimation of each model, yet retains the accuracy. Extensive experiments on three public datasets demonstrate that RicFlow is more robust than EpicFlow, and it outperforms state-of-the-art methods.", "title": "" }, { "docid": "326246fd723fba699a9ae2219082b522", "text": "Metadata in the Haystack environment is expressed according to the Resource Description Framework (RDF) (RDF, 1998). In essence, RDF is a format for describing semantic networks or directed graphs with labeled edges. Nodes and edges are named with uniform resource identifiers (URIs), making them globally unique and thus useful in a distributed environment. Node URIs are used to represent objects, such as web pages, people, agents, and documents. A directed edge connecting two nodes expresses a relationship, given by the URI of the edge.", "title": "" }, { "docid": "49d1d7c47a52fdaf8d09053f63d225e6", "text": "Theory of language, communicative competence, functional account of language use, discourse analysis and social-linguistic considerations have mainly made up the theoretical foundations of communicative approach to language teaching. The principles contain taking communication as the center, reflecting Real Communicating Process, avoiding Constant Error-correcting, and putting grammar at a right place.", "title": "" }, { "docid": "13f2935248240d32452030d21f82b9df", "text": "Policy optimization is a core component of reinforcement learning (RL), and most existing RL methods directly optimize parameters of a policy based on maximizing the expected total reward, or its surrogate. Though often achieving encouraging empirical success, its underlying mathematical principle on policy-distribution optimization is unclear. We place policy optimization into the space of probability measures, and interpret it as Wasserstein gradient flows. On the probabilitymeasure space, under specified circumstances, policy optimization becomes a convex problem in terms of distribution optimization. To make optimization feasible, we develop efficient algorithms by numerically solving the corresponding discrete gradient flows. Our technique is applicable to several RL settings, and is related to many state-ofthe-art policy-optimization algorithms. 
Empirical results verify the effectiveness of our framework, often obtaining better performance compared to related algorithms.", "title": "" } ]
scidocsrr
a5b7048b8c03c8f9a046d9ff182c953a
Agency informing techniques: communicating player agency in interactive narratives
[ { "docid": "f348748d56ee099c5f30a2629c878f37", "text": "Agency in interactive narrative is often narrowly understood as a user’s freedom to either perform virtually embodied actions or alter the mechanics of narration at will, followed by an implicit assumption of “the more agency the better.” This paper takes notice of a broader range of agency phenomena in interactive narrative and gaming that may be addressed by integrating accounts of agency from diverse fields such as sociology of science, digital media studies, philosophy, and cultural theory. The upshot is that narrative agency is contextually situated, distributed between the player and system, and mediated through user interpretation of system behavior and system affordances for user actions. In our new and developing model of agency play, multiple dimensions of agency can be tuned during story execution as a narratively situated mechanism to convey meaning. More importantly, we propose that this model of variable dimensions of agency can be used as an expressive theoretical tool for interactive narrative design. Finally, we present our current interactive narrative work under development as a case study for how the agency play model can be deployed expressively.", "title": "" } ]
[ { "docid": "c5796e3bbe9500a8a14f03873880ca09", "text": "This review highlights the latest developments associated with the use of the Normalized Difference Vegetation Index (NDVI) in ecology. Over the last decade, the NDVI has proven extremely useful in predicting herbivore and non-herbivore distribution, abundance and life history traits in space and time. Due to the continuous nature of NDVI since mid-1981, the relative importance of different temporal and spatial lags on population performance can be assessed, widening our understanding of population dynamics. Previously thought to be most useful in temperate environments, the utility of this satellite-derived index has been demonstrated even in sparsely vegetated areas. Climate models can be used to reconstruct historical patterns in vegetation dynamics in addition to anticipating the effects of future environmental change on biodiversity. NDVI has thus been established as a crucial tool for assessing past and future population and biodiversity consequences of change in climate, vegetation phenology and primary productivity.", "title": "" }, { "docid": "bbb6b192974542b165d3f7a0d139a8e1", "text": "While gamification is gaining ground in business, marketing, corporate management, and wellness initiatives, its application in education is still an emerging trend. This article presents a study of the published empirical research on the application of gamification to education. The study is limited to papers that discuss explicitly the effects of using game elements in specific educational contexts. It employs a systematic mapping design. Accordingly, a categorical structure for classifying the research results is proposed based on the extracted topics discussed in the reviewed papers. The categories include gamification design principles, game mechanics, context of applying gamification (type of application, educational level, and academic subject), implementation, and evaluation. By mapping the published works to the classification criteria and analyzing them, the study highlights the directions of the currently conducted empirical research on applying gamification to education. It also indicates some major obstacles and needs, such as the need for proper technological support, for controlled studies demonstrating reliable positive or negative results of using specific game elements in particular educational contexts, etc. Although most of the reviewed papers report promising results, more substantial empirical research is needed to determine whether both extrinsic and intrinsic motivation of the learners can be influenced by gamification.", "title": "" }, { "docid": "8808a0d628ac2a6b352c90f60457a718", "text": "Software architectures shift the focus of developers from lines-of-code to coarser-grained elements and their interconnection structure. Architecture description languages (ADLs) have been proposed as domain-specific languages for the domain of software architecture. There is still little consensus in the research community on what problems are most important to address in a study of software architecture, what aspects of an architecture should be modeled in an ADL, or even what an ADL is. To shed light on these issues, we provide a framework of architectural domains, or areas of concern in the study of software architectures. We evaluate existing ADLs with respect to the framework and study the relationship between architectural and application domains. 
One conclusion is that, while the architectural domains perspective enables one to approach architectures and ADLs in a new, more structured manner, further understanding of architectural domains, their tie to application domains, and their specific influence on ADLs is needed.", "title": "" }, { "docid": "b73faefcb1a9abbf10b49f6d9e7cc360", "text": "Conditional Batch Normalization (CBN) has proved to be an effective tool for visual question answering. However, previous CBN approaches fuse the linguistic information into image features via a simple affine transformation, thus they have struggled on compositional reasoning and object counting in images. In this paper, we propose a novel CBN method using the Kronecker transformation, termed as Conditional Kronecker Batch Normalization (CKBN). CKBN layer facilitates the explicit and expressive learning of compositional reasoning and robust counting in original images. Besides, we demonstrate that the Kronecker transformation in CKBN layer is a generalization of the affine transformation in prior CBN approaches. It could accelerate the fusion of visual and linguistic information, and thus the convergence of overall model. Experiment results show that our model significantly outperforms previous CBN methods (e.g. FiLM) in compositional reasoning, counting as well as the convergence speed on CLEVR dataset.", "title": "" }, { "docid": "858acbd02250ff2f8325786475b4f3f3", "text": "One of the most important aspects of Grice’s theory of conversation is the drawing of a borderline between what is said and what is implicated. Grice’s views concerning this borderline have been strongly and influentially criticised by relevance theorists. In particular, it has become increasingly widely accepted that Grice’s notion of what is said is too limited, and that pragmatics has a far larger role to play in determining what is said than Grice would have allowed. (See for example Bezuidenhuit 1996; Blakemore 1987; Carston 1991; Recanati 1991, 1993, 2001; Sperber and Wilson 1986; Wilson and Sperber 1981.) In this paper, I argue that the rejection of Grice has moved too swiftly, as a key line of objection which has led to this rejection is flawed. The flaw, we will see, is that relevance theorists rely on a misunderstanding of Grice’s project in his theory of conversation. I am not arguing that Grice’s versions of saying and implicating are right in all details, but simply that certain widespread reasons for rejecting his theory are based on misconceptions.1 Relevance theorists, I will suggest, systematically misunderstand Grice by taking him to be engaged in the same project that they are: making sense of the psychological processes by which we interpret utterances. Notions involved with this project will need to be ones that are relevant to the psychology of utterance interpretation. Thus, it is only reasonable that relevance theorists will require that what is said and what is implicated should be psychologically real to the audience. (We will see that this requirement plays a crucial role in their arguments against Grice.) Grice, I will argue, was not pursuing this project. Rather, I will suggest that he was trying to make sense of quite a different notion of what is said: one on which both speaker and audience may be wrong about what is said. On this sort of notion, psychological reality is not a requirement. 
So objections to Grice based on a requirement of psychological reality will fail.", "title": "" }, { "docid": "904301bf2655364cd170f8a463ba1599", "text": "Received Aug 22, 2017 Revised Nov 12, 2017 Accepted Dec 1, 2017 This research investigates the application of texture features for leaf recognition for herbal plant identification. Malaysia is rich with herbal plants but not many people can identify them and know about their uses. Preservation of the knowledge of these herb plants is important since it enables the general public to gain useful knowledge which they can apply whenever necessary. Leaf image is chosen for plant recognition since it is available and visible all the time. Unlike flowers that are not always available or roots that are not visible and not easy to obtain, leaf is the most abundant type of data available in botanical reference collections. A comparative study has been conducted among three popular texture features that are Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP) and Speeded-Up Robust Features (SURF) with multiclass Support Vector Machine (SVM) classifier. A new leaf dataset has been constructed from ten different herb plants. Experimental results using the new constructed dataset and Flavia, an existing dataset, indicate that HOG and LBP produce similar leaf recognition performance and they are better than SURF.", "title": "" }, { "docid": "3b2aa97c0232857dffa971d9c040d430", "text": "This paper provides a critical analysis of Mobile Learning projects published before the end of 2007. The review uses a Mobile Learning framework to evaluate and categorize 102 Mobile Learning projects, and to briefly introduce exemplary projects for each category. All projects were analysed with the criteria: context, tools, control, communication, subject and objective. Although a significant number of projects have ventured to incorporate the physical context into the learning experience, few projects include a socializing context. Tool support ranges from pure content delivery to content construction by the learners. Although few projects explicitly discuss the Mobile Learning control issues, one can find all approaches from pure teacher control to learner control. Despite the fact that mobile phones initially started as a communication device, communication and collaboration play a surprisingly small role in Mobile Learning projects. Most Mobile Learning projects support novices, although one might argue that the largest potential is supporting advanced learners. All results show the design space and reveal gaps in Mobile Learning research.", "title": "" }, { "docid": "0e8bd7fafa6bda51f6e42801e2e56476", "text": "In order to resist the adverse effect of viewpoint variations for improving vehicle re-identification performance, we design quadruple directional deep learning networks to extract quadruple directional deep learning features (QD-DLF) of vehicle images. The quadruple directional deep learning networks are with similar overall architecture, including the same basic deep learning architecture but different directional feature pooling layers. Specifically, the same basic deep learning architecture is a shortly and densely connected convolutional neural network to extract basic feature maps of an input square vehicle image in the first stage. 
Then, the quadruple directional deep learning networks utilize different directional pooling layers, i.e., horizontal average pooling (HAP) layer, vertical average pooling (VAP) layer, diagonal average pooling (DAP) layer and anti-diagonal average pooling (AAP) layer, to compress the basic feature maps into horizontal, vertical, diagonal and anti-diagonal directional feature maps, respectively. Finally, these directional feature maps are spatially normalized and concatenated together as a quadruple directional deep learning feature for vehicle re-identification. Extensive experiments on both VeRi and VehicleID databases show that the proposed QD-DLF approach outperforms multiple state-of-the-art vehicle re-identification methods.", "title": "" }, { "docid": "a49ea9c9f03aa2d926faa49f4df63b7a", "text": "Deep stacked RNNs are usually hard to train. Recent studies have shown that shortcut connections across different RNN layers bring substantially faster convergence. However, shortcuts increase the computational complexity of the recurrent computations. To reduce the complexity, we propose the shortcut block, which is a refinement of the shortcut LSTM blocks. Our approach is to replace the self-connected parts (ct) with shortcuts (hl−2 t ) in the internal states. We present extensive empirical experiments showing that this design performs better than the original shortcuts. We evaluate our method on CCG supertagging task, obtaining a 8% relatively improvement over current state-of-the-art results.", "title": "" }, { "docid": "4c8ab4d4057353b011d209ad0a27fa1d", "text": "Recurrent Neural Networks (RNN) are widely used to solve a variety of problems and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to the original dense neural network. The network size is reduced by 8× and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90% and speed-up is around 2× to 7×.", "title": "" }, { "docid": "03869f2ac07c13bbce6af743ea5d2551", "text": "In this paper we present a novel vehicle detection method in traffic surveillance scenarios. This work is distinguished by three key contributions. First, a feature fusion backbone network is proposed to extract vehicle features which has the capability of modeling geometric transformations. Second, a vehicle proposal sub-network is applied to generate candidate vehicle proposals based on multi-level semantic feature maps. Finally, a head network is used to refine the categories and locations of these proposals. Benefits from the above cues, vehicles with large variation in occlusion and lighting conditions can be detected with high accuracy. 
Furthermore, the method also demonstrates robustness in the case of motion blur caused by rapid movement of vehicles. We test our network on DETRAC[21] benchmark detection challenge and it shows the state-of-theart performance. Specifically, the proposed method gets the best performances not only at 4 different level: overall, easy, medium and hard, but also in sunny, cloudy and night conditions.", "title": "" }, { "docid": "8b8248d4f2db9ef3f06b4138dd9c6dec", "text": "This review provides an introduction to two eyetracking measures that can be used to study cognitive development and plasticity: pupil dilation and spontaneous blink rate. We begin by outlining the rich history of gaze analysis, which can reveal the current focus of attention as well as cognitive strategies. We then turn to the two lesser-utilized ocular measures. Pupil dilation is modulated by the brain's locus coeruleus-norepinephrine system, which controls physiological arousal and attention, and has been used as a measure of subjective task difficulty, mental effort, and neural gain. Spontaneous eyeblink rate correlates with levels of dopamine in the central nervous system, and can reveal processes underlying learning and goal-directed behavior. Taken together, gaze, pupil dilation, and blink rate are three non-invasive and complementary measures of cognition with high temporal resolution and well-understood neural foundations. Here we review the neural foundations of pupil dilation and blink rate, provide examples of their usage, describe analytic methods and methodological considerations, and discuss their potential for research on learning, cognitive development, and plasticity.", "title": "" }, { "docid": "1cfcc98bcf1e7be84a4e5f984327cb96", "text": "It is approximately 50 years since the first computational experiments were conducted in what has become known today as the field of Genetic Programming (GP), twenty years since John Koza named and popularised the method, and ten years since the first issue appeared of the Genetic Programming & Evolvable Machines journal. In particular, during the past two decades there has been a significant range and volume of development in the theory and application of GP, and in recent years the field has become increasingly applied. There remain a number of significant open issues despite the successful application of GP to a number of challenging real-world problem domains and progress in the development of a theory explaining the behavior and dynamics of GP. These issues must be addressed for GP to realise its full potential and to become a trusted mainstream member of the computational problem solving toolkit. In this paper we outline some of the challenges and open issues that face researchers and practitioners of GP. We hope this overview will stimulate debate, focus the direction of future research to deepen our understanding of GP, and further the development of more powerful problem solving algorithms.", "title": "" }, { "docid": "3433b283726a7e95ba5cb2a3c97cd195", "text": "Black silicon (BSi) represents a very active research area in renewable energy materials. The rise of BSi as a focus of study for its fundamental properties and potentially lucrative practical applications is shown by several recent results ranging from solar cells and light-emitting devices to antibacterial coatings and gas-sensors. 
In this paper, the common BSi fabrication techniques are first reviewed, including electrochemical HF etching, stain etching, metal-assisted chemical etching, reactive ion etching, laser irradiation and the molten salt Fray-Farthing-Chen-Cambridge (FFC-Cambridge) process. The utilization of BSi as an anti-reflection coating in solar cells is then critically examined and appraised, based upon strategies towards higher efficiency renewable solar energy modules. Methods of incorporating BSi in advanced solar cell architectures and the production of ultra-thin and flexible BSi wafers are also surveyed. Particular attention is given to routes leading to passivated BSi surfaces, which are essential for improving the electrical properties of any devices incorporating BSi, with a special focus on atomic layer deposition of Al2O3. Finally, three potential research directions worth exploring for practical solar cell applications are highlighted, namely, encapsulation effects, the development of micro-nano dual-scale BSi, and the incorporation of BSi into thin solar cells. It is intended that this paper will serve as a useful introduction to this novel material and its properties, and provide a general overview of recent progress in research currently being undertaken for renewable energy applications.", "title": "" }, { "docid": "dde075f427d729d028d6d382670f8346", "text": "Using social media Web sites is among the most common activity of today's children and adolescents. Any Web site that allows social interaction is considered a social media site, including social networking sites such as Facebook, MySpace, and Twitter; gaming sites and virtual worlds such as Club Penguin, Second Life, and the Sims; video sites such as YouTube; and blogs. Such sites offer today's youth a portal for entertainment and communication and have grown exponentially in recent years. For this reason, it is important that parents become aware of the nature of social media sites, given that not all of them are healthy environments for children and adolescents. Pediatricians are in a unique position to help families understand these sites and to encourage healthy use and urge parents to monitor for potential problems with cyberbullying, \"Facebook depression,\" sexting, and exposure to inappropriate content.", "title": "" }, { "docid": "e6ca7a2a94c7006b0f2839bb31aa28f8", "text": "While the services-based model of cloud computing makes more and more IT resources available to a wider range of customers, the massive amount of data in cloud platforms is becoming a target for malicious users. Previous studies show that attackers can co-locate their virtual machines (VMs) with target VMs on the same server, and obtain sensitive information from the victims using side channels. This paper investigates VM allocation policies and practical countermeasures against this novel kind of co-resident attack by developing a set of security metrics and a quantitative model. A security analysis of three VM allocation policies commonly used in existing cloud computing platforms reveals that the server's configuration, oversubscription and background traffic have a large impact on the ability to prevent attackers from co-locating with the targets. If the servers are properly configured, and oversubscription is enabled, the best policy is to allocate new VMs to the server with the most VMs. Based on these results, a new strategy is introduced that effectively decreases the probability of attackers achieving co-residence. 
The proposed solution only requires minor changes to current allocation policies, and hence can be easily integrated into existing cloud platforms to mitigate the threat of co-resident attacks.", "title": "" }, { "docid": "a87da46ab4026c566e3e42a5695fd8c9", "text": "Micro aerial vehicles (MAVs) are an excellent platform for autonomous exploration. Most MAVs rely mainly on cameras for building a map of the 3D environment. Therefore, vision-based MAVs require an efficient exploration algorithm to select viewpoints that provide informative measurements. In this paper, we propose an exploration approach that selects in real time the next-best-view that maximizes the expected information gain of new measurements. In addition, we take into account the cost of reaching a new viewpoint in terms of distance and predictability of the flight path for a human observer. Finally, our approach selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. We implemented and thoroughly tested our approach and the experiments show that it offers an improved performance compared to other state-of-the-art algorithms in terms of precision of the reconstruction, execution time, and smoothness of the path.", "title": "" }, { "docid": "ea959ccd4eb6b6ac1d2acd2bfde7c633", "text": "This paper proposes a mixed-initiative feature engineering approach using explicit knowledge captured in a knowledge graph complemented by a novel interactive visualization method. Using the explicitly captured relations and dependencies between concepts and their properties, feature engineering is enabled in a semi-automatic way. Furthermore, the results (and decisions) obtained throughout the process can be utilized for refining the features and the knowledge graph. Analytical requirements can then be conveniently captured for feature engineering -- enabling integrated semantics-driven data analysis and machine learning.", "title": "" }, { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "ce2d4247b1072b3c593e73fe9d67cf63", "text": "OBJECTIVE\nTo improve walking and other aspects of physical function with a progressive 6-month exercise program in patients with multiple sclerosis (MS).\n\n\nMETHODS\nMS patients with mild to moderate disability (Expanded Disability Status Scale scores 1.0 to 5.5) were randomly assigned to an exercise or control group. The intervention consisted of strength and aerobic training initiated during 3-week inpatient rehabilitation and continued for 23 weeks at home. The groups were evaluated at baseline and at 6 months. The primary outcome was walking speed, measured by 7.62 m and 500 m walk tests. Secondary outcomes included lower extremity strength, upper extremity endurance and dexterity, peak oxygen uptake, and static balance.
An intention-to-treat analysis was used.\n\n\nRESULTS\nNinety-one (96%) of the 95 patients entering the study completed it. Change between groups was significant in the 7.62 m (p = 0.04) and 500 m walk tests (p = 0.01). In the 7.62 m walk test, 22% of the exercising patients showed clinically meaningful improvements. The exercise group also showed increased upper extremity endurance as compared to controls. No other noteworthy exercise-induced changes were observed. Exercise adherence varied considerably among the exercisers.\n\n\nCONCLUSIONS\nWalking speed improved in this randomized study. The results confirm that exercise is safe for multiple sclerosis patients and should be recommended for those with mild to moderate disability.", "title": "" } ]
scidocsrr
037202033ea5bbe868ec3406dc57109e
A secure location verification method for ADS-B
[ { "docid": "47c723b0c41fb26ed7caa077388e2e1b", "text": "Automatic dependent surveillance-broadcast (ADS-B) is the communications protocol currently being rolled out as part of next-generation air transportation systems. As the heart of modern air traffic control, it will play an essential role in the protection of two billion passengers per year, in addition to being crucial to many other interest groups in aviation. The inherent lack of security measures in the ADS-B protocol has long been a topic in both the aviation circles and in the academic community. Due to recently published proof-of-concept attacks, the topic is becoming ever more pressing, particularly with the deadline for mandatory implementation in most airspaces fast approaching. This survey first summarizes the attacks and problems that have been reported in relation to ADS-B security. Thereafter, it surveys both the theoretical and practical efforts that have been previously conducted concerning these issues, including possible countermeasures. In addition, the survey seeks to go beyond the current state of the art and gives a detailed assessment of security measures that have been developed more generally for related wireless networks such as sensor networks and vehicular ad hoc networks, including a taxonomy of all considered approaches.", "title": "" } ]
[ { "docid": "a1d9fef7fda8a547df136565afd5a443", "text": "The authors proposed a circular-polarized array antenna by using hexagonal radiating apertures in the 60 GHz-band. The hexagonal radiating aperture is designed, and the good axial ratio characteristics are achieved in the boresight. We analyze the full structure of the 16×16-element array that combines the 2×2-element subarrays and a 64-way divider. The reflection is less than −14dB over 4.9% bandwidth where the axial ratio is less than 2.5dB. High antenna efficiency of 88.7% is obtained at 61.5GHz with the antenna gain of 33.3dBi including losses. The 1dB-down gain bandwidth is 6.8%.", "title": "" }, { "docid": "5c05ad44ac2bf3fb26cea62d563435f8", "text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.", "title": "" }, { "docid": "c85c3ef7100714d6d08f726aa8768bb9", "text": "An adaptive Kalman filter algorithm is adopted to estimate the state of charge (SOC) of a lithium-ion battery for application in electric vehicles (EVs). Generally, the Kalman filter algorithm is selected to dynamically estimate the SOC. However, it easily causes divergence due to the uncertainty of the battery model and system noise. To obtain a better convergent and robust result, an adaptive Kalman filter algorithm that can greatly improve the dependence of the traditional filter algorithm on the battery model is employed. In this paper, the typical characteristics of the lithium-ion battery are analyzed by experiment, such as hysteresis, polarization, Coulomb efficiency, etc. In addition, an improved Thevenin battery model is achieved by adding an extra RC branch to the Thevenin model, and model parameters are identified by using the extended Kalman filter (EKF) algorithm. Further, an adaptive EKF (AEKF) algorithm is adopted to the SOC estimation of the lithium-ion battery. Finally, the proposed method is evaluated by experiments with federal urban driving schedules. The proposed SOC estimation using AEKF is more accurate and reliable than that using EKF. The comparison shows that the maximum SOC estimation error decreases from 14.96% to 2.54% and that the mean SOC estimation error reduces from 3.19% to 1.06%.", "title": "" }, { "docid": "67818e657bc3c47f2819a36ed4686c1a", "text": "The lay media and scientific literature have focused increasing attention on vitamin D deficiency and insufficiency in recent years. 
Low vitamin D levels confer an increased risk of abnormal bone mineralization, and are linked to poor bone health in epilepsy patients. However, vitamin D is not the only determinant of bone health in children with epilepsy. Anticonvulsant medications, in addition to features and comorbidities of epilepsy and coexisting neurologic diseases, are important factors in this complex topic. We review the basic metabolism of vitamin D in terms of bone health among children with epilepsy. We also discuss the literature regarding vitamin D and bone mineral density in this population. Finally, we suggest algorithms for screening and treating vitamin D insufficiency in these patients.", "title": "" }, { "docid": "f58a1b5f4c914a0ab3fcf3e2a8820e45", "text": "This paper provides a theoretical framework of energy-optimal mobile cloud computing under stochastic wireless channel. Our objective is to conserve energy for the mobile device, by optimally executing mobile applications in the mobile device (i.e., mobile execution) or offloading to the cloud (i.e., cloud execution). One can, in the former case sequentially reconfigure the CPU frequency; or in the latter case dynamically vary the data transmission rate to the cloud, in response to the stochastic channel condition. We formulate both scheduling problems as constrained optimization problems, and obtain closed-form solutions for optimal scheduling policies. Furthermore, for the energy-optimal execution strategy of applications with small output data (e.g., CloudAV), we derive a threshold policy, which states that the data consumption rate, defined as the ratio between the data size (L) and the delay constraint (T), is compared to a threshold which depends on both the energy consumption model and the wireless channel model. Finally, numerical results suggest that a significant amount of energy can be saved for the mobile device by optimally offloading mobile applications to the cloud in some cases. Our theoretical framework and numerical investigations will shed light on system implementation of mobile cloud computing under stochastic wireless channel.", "title": "" }, { "docid": "ba533a610f95d44bf5416e17b07348dd", "text": "It is argued that, hidden within the flow of signals from typical cameras, through image processing, to display media, is a homomorphic filter. While homomorphic filtering is often desirable, there are some occasions where it is not. Thus, cancellation of this implicit homomorphic filter is proposed, through the introduction of an antihomomorphic filter. This concept gives rise to the principle of quantigraphic image processing, wherein it is argued that most cameras can be modeled as an array of idealized light meters each linearly responsive to a semi-monotonic function of the quantity of light received, integrated over a fixed spectral response profile. This quantity depends only on the spectral response of the sensor elements in the camera. A particular class of functional equations, called comparametric equations, is introduced as a basis for quantigraphic image processing. These are fundamental to the analysis and processing of multiple images differing only in exposure. The \"gamma correction\" of an image is presented as a simple example of a comparametric equation, for which it is shown that the underlying quantigraphic function does not pass through the origin. Thus, it is argued that exposure adjustment by gamma correction is inherently flawed, and alternatives are provided.
These alternatives, when applied to a plurality of images that differ only in exposure, give rise to a new kind of processing in the \"amplitude domain\". The theoretical framework presented in this paper is applicable to the processing of images from nearly all types of modern cameras. This paper is a much revised draft of a 1992 peer-reviewed but unpublished report by the author, entitled \"Lightspace and the Wyckoff principle.\"", "title": "" }, { "docid": "50c0f3cdccc1fe63f3fcb4cb3c983617", "text": "Junho Yang Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: yang125@illinois.edu Ashwin Dani Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: adani@illinois.edu Soon-Jo Chung Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: sjchung@illinois.edu Seth Hutchinson Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 e-mail: seth@illinois.edu", "title": "" }, { "docid": "e9d2278132f83a27b24a42e64aa28c1b", "text": "The next generation of VR simulators could take into account a novel input: the user's mental state, as measured with electrodes and a brain-computer interface. One illustration of this promising path is a project that adapted a guidance system's force feedback to the user's mental workload in real time. A first application of this approach is a medical training simulator that provides virtual assistance that adapts to the trainee's mental activity. Such results pave the way to VR systems that will automatically reconfigure and adapt to their users' mental states and cognitive processes.", "title": "" }, { "docid": "813a38597510d0415818847f3db2374f", "text": "Numerous consumer reviews of products are now available on the Internet. Consumer reviews contain rich and valuable knowledge for both firms and users. However, the reviews are often disorganized, leading to difficulties in information navigation and knowledge acquisition. This article proposes a product aspect ranking framework, which automatically identifies the important aspects of products from online consumer reviews, aiming at improving the usability of the numerous reviews. The important product aspects are identified based on two observations: 1) the important aspects are usually commented on by a large number of consumers and 2) consumer opinions on the important aspects greatly influence their overall opinions on the product. In particular, given the consumer reviews of a product, we first identify product aspects by a shallow dependency parser and determine consumer opinions on these aspects via a sentiment classifier. We then develop a probabilistic aspect ranking algorithm to infer the importance of aspects by simultaneously considering aspect frequency and the influence of consumer opinions given to each aspect over their overall opinions. The experimental results on a review corpus of 21 popular products in eight domains demonstrate the effectiveness of the proposed approach. 
Moreover, we apply product aspect ranking to two real-world applications, i.e., document-level sentiment classification and extractive review summarization, and achieve significant performance improvements, which demonstrate the capacity of product aspect ranking in facilitating real-world applications.", "title": "" }, { "docid": "c1f095252c6c64af9ceeb33e78318b82", "text": "Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. The optical see-through systems present an additional challenge because, unlike the video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through head-mounted displays. We first introduce a method for calibrating monocular optical see-through displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure. The method integrates the measurements for the camera and a six-degrees-of-freedom tracker that is attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker as well as a vision-based infrared tracker we have built. In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. In this method, the user interaction to perform the calibration is extremely easy compared to prior methods, and there is no requirement for keeping the head immobile while performing the calibration. In the stereo calibration case, the user aligns a stereoscopically fused 2D marker, which is perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.", "title": "" }, { "docid": "11c98b44c793ed963fbc7ad5fc46aa48", "text": "Power Divider (PD) design intended for feeding dipole antenna meant for Global System for Mobile Communications (GSM) 900 applications with an antenna height of 22 mm, operating in the frequency range of 880~960 MHz is presented herein. The PD provides 3 dB power division along with complementary phase at its output. The out of phase division of power divider is obtained by utilizing the concept of defected ground structures. A slot line accompanied by T-junction makes up the defected ground region, while coupled microstrip lines form the feed positions. The simulated and the measured results are in good coherence. PD shows good return loss and low insertion loss. It is observed that the dual polarized base station antenna has VSWR below 2 in the required frequency range, 10 dB gain and port to port isolation is less than 22 dB. The antenna is dual polarized and a + or - 45 degrees polarization is maintained.", "title": "" }, { "docid": "b0815caebe9373220195ac3b143abeca", "text": "This paper presents the motivation, basis and a prototype implementation of an ethical adaptor capable of using a moral affective function, guilt, as a basis for altering a robot's ongoing behavior.
While the research is illustrated in the context of the battlefield, the methods described are believed generalizable to other domains such as eldercare and are potentially extensible to a broader class of moral emotions, including compassion and empathy.", "title": "" }, { "docid": "6f57ff051947be560a36c91b9901e718", "text": "This paper presents a novel approach to creating full view panoramic mosaics from image sequences. Unlike current panoramic stitching methods, which usually require pure horizontal camera panning, our system does not require any controlled motions or constraints on how the images are taken (as long as there is no strong motion parallax). For example, images taken from a hand-held digital camera can be stitched seamlessly into panoramic mosaics. Because we represent our image mosaics using a set of transforms, there are no singularity problems such as those existing at the top and bottom of cylindrical or spherical maps. Our algorithm is fast and robust because it directly recovers 3D rotations instead of general 8 parameter planar perspective transforms. Methods to recover camera focal length are also presented. We also present an algorithm for efficiently extracting environment maps from our image mosaics. By mapping the mosaic onto an artibrary texture-mapped polyhedron surrounding the origin, we can explore the virtual environment using standard 3D graphics viewers and hardware without requiring special-purpose players. CR", "title": "" }, { "docid": "f8d50c7fe96fdf8fbe06332ab7e1a2a6", "text": "There is a strong need for advanced control methods in battery management systems, especially in the plug-in hybrid and electric vehicles sector, due to cost and safety issues of new high-power battery packs and high-energy cell design. Limitations in computational speed and available memory require the use of very simple battery models and basic control algorithms, which in turn result in suboptimal utilization of the battery. This work investigates the possible use of optimal control strategies for charging. We focus on the minimum time charging problem, where different constraints on internal battery states are considered. Based on features of the open-loop optimal charging solution, we propose a simple one-step predictive controller, which is shown to recover the time-optimal solution, while being feasible for real-time computations. We present simulation results suggesting a decrease in charging time by 50% compared to the conventional constant-current / constant-voltage method for lithium-ion batteries.", "title": "" }, { "docid": "080b200b211c3145c92c70478a084bfb", "text": "The computational time complexity is an important topic in t he theory of evolutionary algorithms (EAs). This paper repo rts some new results on the average time complexity of EAs. Based on rift analysis, some useful drift conditions for derivin g the time complexity of EAs are studied, including conditions un der which an EA will take no more than polynomial time (in prob lem size) to solve a problem and conditions under which an EA will take at least exponential time (in problem size) to solve a pr oblem. The paper first presents the general results, and then uses se veral problems as examples to illustrate how these general r su ts can be applied to concrete problems in analyzing the average tim complexity of EAs. 
While previous work only considered (1 + 1) EAs without any crossover, the EAs considered in this paper are fairly general, which use a finite population, crossover, mutation, and selection. Index Terms Evolutionary algorithms, time complexity, random sequences, drift analysis, stochastic inequalities. I. INTRODUCTION Evolutionary algorithms (EAs) are a powerful class of adaptive search algorithms [1], [2], [3]. They have been used to solve many combinatorial problems with success in recent years. However, theories on explaining why and how EAs work are still relatively few in spite of recent efforts [4]. The computational time complexity of EAs is largely unknown, except for a few simple cases [5], [6], [7], [8], [9]. Ambati et al. [6] and Fogel [7] estimated the computational time complexity of their EAs on the traveling salesman problem. No theoretical results were given. Rudolph [8] proved that (1 + 1) EAs with mutation probability pm = 1/n, where n is the number of bits in a binary string (i.e., individual) and pm is the mutation probability, converge in average time O(n log n) for the ONE-MAX problem. Droste et al. [9] carried out a rigorous complexity analysis of (1+1) EAs for linear functions with Boolean inputs. However, all of these results were based on EAs with a population size of 1 and without any crossover operators. Nimwegen et al. [10], [11] developed a theory which predicts the total number of fitness function evaluations needed to reach a global optimum by epochal dynamics as a function of mutation rate and population size. However, no relationship to the problem size was studied. He et al. [12], [13] showed that genetic algorithms (GAs) may take exponential average time to solve some deceptive problems. This paper presents a more general theory about the average time complexity of EAs. The motivation of this study is to establish a general theory for a class of EAs, rather than a particular EA. The theory can then be used to derive specific complexity results for different EAs on different problems. The theory has been developed using drift analysis [14], [15] — a very useful technique in analyzing random sequences. It can be used to estimate the first hitting time by estimating the drift of a random sequence. To our best knowledge, this is the first attempt that drift analysis is introduced into the theoretical study of evolutionary computation. One of the major advantages of using drift analysis is that it is often easier to estimate the drift than to estimate the first hitting time directly. The techniques of drift analysis can also be applied to random sequences which are not Markovian [14]. The basic idea of this paper is as follows. We first model the evolution of an EA population as a random sequence, e.g., a Markov chain. A population of multiple individuals will be considered. Both crossover and mutation are included in the EA. Then we analyzed the drift of this sequence to and from the optimal solution (assuming we are solving an optimization problem). Various bounds on the first hitting time will be derived under different drift conditions. Some drift conditions cause the random sequence to drift away from the optimal solution, while other drift conditions enable the sequence to drift towards the optimal solution. We will study the conditions which are used to determine the time complexity of an EA to solve a problem, whether in polynomial time (in problem size) or in exponential time.
To illustrate the application of the above general theory, we will apply the theoretical results to several well-known problems, including a classical combinatorial optimization problem — the subset sum problem. It is shown in this paper that a certain family of subset sum problems can be solved by an EA within polynomial time, while other families of subset sum problems will need at least exponential time to solve. Although the EAs used in our study do not include all possible variations of EAs, they do represent a fairly large class of EAs which have multiple individuals and use both crossover and mutation. The rest of this paper is organized as follows: Section II introduces briefly EAs and drift analysis. Section III studies the conditions under which EAs can solve a problem within polynomial time on average. A general theorem is first presented. Then examples, including the subset sum problem, are studied to show the application of the theorem. Section IV studies the conditions under which EAs need at least exponential computation time to solve a problem. Both a general theorem and an application of the theorem are given. Section V discusses some weaker drift conditions for the subset sum problem. Finally, Section VI concludes with a brief summary of the paper and some future work. II. EVOLUTIONARY ALGORITHMS AND DRIFT ANALYSIS A. Evolutionary Algorithms The combinatorial optimization problem considered in this paper can be described as follows: Given a finite state space S and a function f(x), x ∈ S, find max{f(x); x ∈ S}. (1) Assume x* is one state with the maximum function value, and fmax = f(x*). The EA for solving the combinatorial optimization problem can be described as follows: 1) Initialization: generate, either randomly or heuristically, an initial population of 2N individuals, denoted by ξ0 = (x1, · · · , x2N), and let k ← 0, where N > 0 is an integer. For any population ξk, define f(ξk) = max{f(xi); xi ∈ ξk}. 2) Generation: generate a new (intermediate) population by crossover and mutation (or any other operators for generating offspring), and denote it as ξk+1/2. 3) Selection: select and reproduce 2N individuals from populations ξk+1/2 and ξk, and obtain another (new intermediate) population ξk+S. 4) If f(ξk+S) = fmax, then stop; otherwise let ξk+1 = ξk+S and k ← k + 1, and go to step 2. Obviously the above description includes a wide range of EAs using crossover, mutation and selection. The description does not set any restrictions on the type of crossover, mutation or selection schemes used. It includes EAs which use crossover or mutation alone. The EA framework given above is closer to evolution strategies [?] and evolutionary programming [?] than to GAs [2] in the sense that selection is applied after crossover and/or mutation. However, the main results given in this paper, i.e., Theorems 1 and 10 are independent of any such implementation details. In fact, they hold for virtually any stochastic search algorithms. B. Drift Analysis Assume x* is an optimal point, and let d(x, x*) be the distance between a point x and x*.
If there are more than one optimal point (that is, a set S*), we use d(x, S*) = min{d(x, x*) : x* ∈ S*} as the distance between individual x and the optimal set S*. In short we denote the distance by d(x). Usually d(x) satisfies d(x*) = 0 and d(x) > 0 for any x ∉ S*. However, in some parts of this paper, we will consider a pseudo-distance d(x) which allows d(x) = 0 for some x ∉ S*. Given a population X = {x1, · · · , x2N}, let d(X) = min{d(x) : x ∈ X}, (2) which is used to measure the distance of the population to the optimal solution. The sequence {d(ξk); k = 0, 1, 2, · · · } generated by the EA is a random sequence. The sequence can be modeled by a homogeneous Markov chain if no self-adaptation is used [16]. The drift of the random sequence {d(ξk), k = 0, 1, · · · } at time k is defined by ∆(d(ξk)) = d(ξk+1) − d(ξk). Define the stopping time of an EA as τ = min{k; d(ξk) = 0}, which is the first hitting time on the optimal solution. The task now is to investigate the relationship between the expected first hitting time τ and the problem size n. In this paper, we focus on the following question: under what conditions of the drift ∆(d(ξk)) can we estimate the expected first hitting time E[τ]? In particular, we study the conditions under which an EA is guaranteed to find the optimal solution in polynomial time on average and conditions under which an EA takes at least exponential time on average to find the optimal solution. The idea behind drift analysis is quite straightforward. It can be explained (by sacrificing mathematical rigor) using a deterministic algorithm as an example. Assume the distance between the starting solution and the optimal solution is d, and a deterministic algorithm is used to solve an optimization problem. If the drift towards the optimal solution is greater than ∆ at each time step (i.e., iteration), we would need at most d/∆ time steps to find the optimal solution. Hence the key issue here is to estimate ∆. Sasaki and Hajek [17] have successfully used this method to estimate the time complexity of simulated annealing for the maximum matching problem.", "title": "" }, { "docid": "9b702c679d7bbbba2ac29b3a0c2f6d3b", "text": "Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay.
Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $\\left [{O\\left ({1 / V}\\right), O\\left ({V}\\right) }\\right ]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.", "title": "" }, { "docid": "79fd1db13ce875945c7e11247eb139c8", "text": "This paper provides a comprehensive review of outcome studies and meta-analyses of effectiveness studies of psychodynamic therapy (PDT) for the major categories of mental disorders. Comparisons with inactive controls (waitlist, treatment as usual and placebo) generally but by no means invariably show PDT to be effective for depression, some anxiety disorders, eating disorders and somatic disorders. There is little evidence to support its implementation for post-traumatic stress disorder, obsessive-compulsive disorder, bulimia nervosa, cocaine dependence or psychosis. The strongest current evidence base supports relatively long-term psychodynamic treatment of some personality disorders, particularly borderline personality disorder. Comparisons with active treatments rarely identify PDT as superior to control interventions and studies are generally not appropriately designed to provide tests of statistical equivalence. Studies that demonstrate inferiority of PDT to alternatives exist, but are small in number and often questionable in design. Reviews of the field appear to be subject to allegiance effects. The present review recommends abandoning the inherently conservative strategy of comparing heterogeneous \"families\" of therapies for heterogeneous diagnostic groups. Instead, it advocates using the opportunities provided by bioscience and computational psychiatry to creatively explore and assess the value of protocol-directed combinations of specific treatment components to address the key problems of individual patients.", "title": "" }, { "docid": "0edc71f08db3160d8693ab3a5ca22025", "text": "This paper presents a method for precision patterning of polydimethylsiloxane (PDMS) membranes based on parylene C lift-off. The process permits the construction of PDMS membranes either with a highly flat, uniform top surface or with a controlled curvature. Effects of varying processing parameters on the geometrical characteristics of the PDMS membranes are described. The paper also demonstrates the application of the PDMS precision patterning method to the construction of PDMS microlens arrays, which require curved top surfaces, and a 3-axis electrostatic positioning stage that uses PDMS membranes with flat surfaces as a bonding material as well as a precisely defined spacer. (Some figures in this article are in colour only in the electronic version)", "title": "" }, { "docid": "9c68b87f99450e85f3c0c6093429937d", "text": "We present a method for activity recognition that first estimates the activity performer's location and uses it with input data for activity recognition. Existing approaches directly take video frames or entire video for feature extraction and recognition, and treat the classifier as a black box. Our method first locates the activities in each input video frame by generating an activity mask using a conditional generative adversarial network (cGAN). The generated mask is appended to color channels of input images and fed into a VGG-LSTM network for activity recognition. 
To test our system, we produced two datasets with manually created masks, one containing Olympic sports activities and the other containing trauma resuscitation activities. Our system makes activity prediction for each video frame and achieves performance comparable to the state-of-the-art systems while simultaneously outlining the location of the activity. We show how the generated masks facilitate the learning of features that are representative of the activity rather than accidental surrounding information.", "title": "" }, { "docid": "d449a4d183c2a3e1905935f624d684d3", "text": "This paper introduces the approach CBRDIA (Case-based Reasoning for Document Invoice Analysis) which uses the principles of case-based reasoning to analyze, recognize and interpret invoices. Two CBR cycles are performed sequentially in CBRDIA. The first one consists in checking whether a similar document has already been processed, which makes the interpretation of the current one easy. The second cycle works if the first one fails. It processes the document by analyzing and interpreting its structuring elements (addresses, amounts, tables, etc) one by one. The CBR cycles allow processing documents from both known and unknown classes. Applied on 923 invoices, CBRDIA reaches a recognition rate of 85,22% for documents of known classes and 74,90% for documents of unknown classes.", "title": "" } ]
scidocsrr
1ef6c4bf5b741807fee8047feaba1d3a
Brain MRI super-resolution using deep 3D convolutional networks
[ { "docid": "3768b0373b9c2c38ad30987fbce92915", "text": "Image super-resolution (SR) aims to recover high-resolution images from their low-resolution counterparts for improving image analysis and visualization. Interpolation methods, widely used for this purpose, often result in images with blurred edges and blocking effects. More advanced methods such as total variation (TV) retain edge sharpness during image recovery. However, these methods only utilize information from local neighborhoods, neglecting useful information from remote voxels. In this paper, we propose a novel image SR method that integrates both local and global information for effective image recovery. This is achieved by, in addition to TV, low-rank regularization that enables utilization of information throughout the image. The optimization problem can be solved effectively via alternating direction method of multipliers (ADMM). Experiments on MR images of both adult and pediatric subjects demonstrate that the proposed method enhances the details in the recovered high-resolution images, and outperforms methods such as the nearest-neighbor interpolation, cubic interpolation, iterative back projection (IBP), non-local means (NLM), and TV-based up-sampling.", "title": "" } ]
[ { "docid": "e371f9b6ed1a8799e201d6d76ba6c5a1", "text": "A 13-year-old girl with virginal hypertrophy (bilateral extensive juvenile hypertrophy) of the breasts is presented. Her breasts began to grow rapidly after puberty and reached an enormous size within a year. On examination, both breasts were greatly enlarged. Routine blood chemistry and the endocrinological investigations were normal. The computerized tomography scan of the sella was unremarkable. A bilateral reduction mammaplasty was performed, and histological analysis of the breast tissue revealed the diagnosis of virginal hypertrophy. After four months her breasts began to grow again, and a second mammaplasty was performed. After this operation, tamoxifen citrate was given to prevent recurrence for four months, and during the follow-up period of 20 months, no recurrence was noted.", "title": "" }, { "docid": "09404689f2d1620ac85966c19a2671b5", "text": "Purpose. An upsurge of pure red cell aplasia (PRCA) cases associated with subcutaneous treatment with epoetin alpha has been reported. A formulation change introduced in 1998 is suspected to be the reason for the induction of antibodies that also neutralize the native protein. The aim of this study was to detect the mechanism by which the new formulation may induce these antibodies. Methods. Formulations of epoetin were subjected to gel permeation chromatography with UV detection, and the fractions were analyzed by an immunoassay for the presence of epoetin. Results. The chromatograms showed that Eprex®/Erypo® contained micelles of Tween 80. A minute amount of epoetin (0.008-0.033% of the total epoetin content) coeluted with the micelles, as evidenced by ELISA. When 0.03% (w/v) Tween 80, corresponding to the concentration in the formulation, was added to the elution medium, the percentage of epoetin eluting before the main peak was 0.68%. Conclusions. Eprex®/Erypo® contains micelle-associated epoetin, which may be a risk factor for the development of antibodies against epoetin.", "title": "" }, { "docid": "1b6ddffacc50ad0f7e07675cfe12c282", "text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. 
to estimate both simultaneously when neither is known.", "title": "" }, { "docid": "b31bae9e7c95e070318df8279cdd18d5", "text": "This article focuses on the ethical analysis of cyber warfare, the warfare characterised by the deployment of information and communication technologies. It addresses the vacuum of ethical principles surrounding this phenomenon by providing an ethical framework for the definition of such principles. The article is divided into three parts. The first one considers cyber warfare in relation to the so-called information revolution and provides a conceptual analysis of this kind of warfare. The second part focuses on the ethical problems posed by cyber warfare and describes the issues that arise when Just War Theory is endorsed to address them. The final part introduces Information Ethics as a suitable ethical framework for the analysis of cyber warfare, and argues that the vacuum of ethical principles for this kind of warfare is overcome when Just War Theory and Information Ethics are merged together.", "title": "" }, { "docid": "73b62ff6e2a9599d465f25e554ad0fb7", "text": "Rapid advancements in technology coupled with drastic reduction in cost of storage have resulted in tremendous increase in the volumes of stored data. As a consequence, analysts find it hard to cope with the rates of data arrival and the volume of data, despite the availability of many automated tools. In a digital investigation context, obtaining the information that led to a security breach and corroborating it is the contemporary challenge. Traditional techniques that rely on keyword based search fall short of interpreting data relationships and causality that is inherent to the artifacts, present across one or more sources of information. The problem of handling very large volumes of data, and discovering the associations among the data, emerges as an important contemporary challenge. The work reported in this paper is based on the use of metadata associations and eliciting the inherent relationships. We study the metadata associations methodology and introduce the algorithms to group artifacts. We establish that grouping artifacts based on metadata can provide a volume reduction of at least $$\\frac{1}{2M}$$, even on a single source, where M is the largest number of metadata associated with an artifact in that source. The value of M is independent of inherently available metadata on any given source. As one understands the underlying data better, one can further refine the value of M iteratively thereby enhancing the volume reduction capabilities. We also establish that such reduction in volume is independent of the distribution of metadata associations across artifacts in any given source. We systematically develop the algorithms necessary to group artifacts on an arbitrary collection of sources and study the complexity.", "title": "" }, { "docid": "5f77218388ee927565a993a8e8c48ef3", "text": "The paper presents an idea of Lexical Platform proposed as a means for a lightweight integration of various lexical resources into one complex (from the perspective of non-technical users). All LRs will be represented as software web components implementing a minimal set of predefined programming interfaces providing functionality for querying and generating simple common presentation format. A common data format for the resources will not be required.
Users will be able to search, browse and navigate via resources on the basis of anchor elements of a limited set of types. Lexical resources linked to the platform via components will preserve their identity.", "title": "" }, { "docid": "ed4178ec9be6f4f8e87a50f0bf1b9a41", "text": "PURPOSE\nTo report a case of central retinal artery occlusion (CRAO) in a patient with biopsy-verified Wegener's granulomatosis (WG) with positive C-ANCA.\n\n\nMETHODS\nA 55-year-old woman presented with a 3-day history of acute painless bilateral loss of vision; she also complained of fever and weight loss. Examination showed a CRAO in the left eye and angiographically documented choroidal ischemia in both eyes.\n\n\nRESULTS\nThe possibility of systemic vasculitis was not kept in mind until further studies were carried out; methylprednisolone pulse therapy was then started. Renal biopsy disclosed focal and segmental necrotizing vasculitis of the medium-sized arteries, supporting the diagnosis of WG, and cyclophosphamide pulse therapy was administered with gradual improvement, but there was no visual recovery.\n\n\nCONCLUSION\nCRAO as presenting manifestation of WG, in the context of retinal vasculitis, is very uncommon, but we should be aware of WG in the etiology of CRAO. This report shows the difficulty of diagnosing Wegener's granulomatosis; it requires a high index of suspicion, and we should obtain an accurate medical history and repeat serological and histopathological examinations. It emphasizes that inflammation of arteries leads to irreversible retinal infarction, and visual loss may occur.", "title": "" }, { "docid": "494030ce6b5294bf3ebdf2f89788230b", "text": "Natural language understanding (NLU) is a core component of a spoken dialogue system. Recently recurrent neural networks (RNN) obtained strong results on NLU due to their superior ability of preserving sequential information over time. Traditionally, the NLU module tags semantic slots for utterances considering their flat structures, as the underlying RNN structure is a linear chain. However, natural language exhibits linguistic properties that provide rich, structured information for better understanding. This paper introduces a novel model, knowledge-guided structural attention networks (K-SAN), a generalization of RNN to additionally incorporate non-flat network topologies guided by prior knowledge. There are two characteristics: 1) important substructures can be captured from small training data, allowing the model to generalize to previously unseen test data; 2) the model automatically figures out the salient substructures that are essential to predict the semantic tags of the given sentences, so that the understanding performance can be improved. The experiments on the benchmark Air Travel Information System (ATIS) data show that the proposed K-SAN architecture can effectively extract salient knowledge from substructures with an attention mechanism, and outperform the performance of the state-of-the-art neural network based frameworks.", "title": "" }, { "docid": "6f89c0f3f6590d32bd5e71ee876a65e2", "text": "Plant growth-promoting rhizobacteria (PGPR) are naturally occurring soil bacteria that aggressively colonize plant roots and benefit plants by providing growth promotion. Inoculation of crop plants with certain strains of PGPR at an early stage of development improves biomass production through direct effects on root and shoots growth. 
Inoculation of ornamentals, forest trees, vegetables, and agricultural crops with PGPR may result in multiple effects on early-season plant growth, as seen in the enhancement of seedling germination, stand health, plant vigor, plant height, shoot weight, nutrient content of shoot tissues, early bloom, chlorophyll content, and increased nodulation in legumes. PGPR are reported to influence the growth, yield, and nutrient uptake by an array of mechanisms. They help in increasing nitrogen fixation in legumes, help in promoting free-living nitrogen-fixing bacteria, increase supply of other nutrients, such as phosphorus, sulphur, iron and copper, produce plant hormones, enhance other beneficial bacteria or fungi, control fungal and bacterial diseases and help in controlling insect pests. There has been much research interest in PGPR and there is now an increasing number of PGPR being commercialized for various crops. Several reviews have discussed specific aspects of growth promotion by PGPR. In this review, we have discussed various bacteria which act as PGPR, mechanisms and the desirable properties exhibited by them.", "title": "" }, { "docid": "e165cac5eb7ad77b43670e4558011210", "text": "PURPOSE\nTo retrospectively review our experience in infants with glanular hypospadias or hooded prepuce without meatal anomaly, who underwent circumcision with the plastibell device. Although circumcision with the plastibell device is well described, there are no reported experiences pertaining to hooded prepuce or glanular hypospadias that have been operated on by this technique.\n\n\nMATERIALS AND METHODS\nBetween September 2002 and September 2008, 21 children with hooded prepuce (age 1 to 11 months, mean 4.6 months) were referred for hypospadias repair. Four of them did not have meatal anomaly. Their parents accepted this small anomaly and requested circumcision without glanuloplasty. In all cases, the circumcision was corrected by a plastibell device.\n\n\nRESULTS\nNo complications occurred in the circumcised patients, except delayed falling of bell in one case that was removed by a surgeon, after the tenth day.\n\n\nCONCLUSION\nCircumcision with the plastibell device is a suitable method for excision of hooded prepuce. It can also be used successfully in infants, who have miniglanular hypospadias, and whose parents accepted this small anomaly.", "title": "" }, { "docid": "db0d0348ae9cd4fa225629d154ed9501", "text": "In this paper, we present a systematic study for the detection of malicious applications (or apps) on popular Android Markets. To this end, we first propose a permissionbased behavioral footprinting scheme to detect new samples of known Android malware families. Then we apply a heuristics-based filtering scheme to identify certain inherent behaviors of unknown malicious families. We implemented both schemes in a system called DroidRanger. The experiments with 204, 040 apps collected from five different Android Markets in May-June 2011 reveal 211 malicious ones: 32 from the official Android Market (0.02% infection rate) and 179 from alternative marketplaces (infection rates ranging from 0.20% to 0.47%). Among those malicious apps, our system also uncovered two zero-day malware (in 40 apps): one from the official Android Market and the other from alternative marketplaces. The results show that current marketplaces are functional and relatively healthy. 
However, there is also a clear need for a rigorous policing process, especially for non-regulated alternative marketplaces.", "title": "" }, { "docid": "eaa37c0420dbc804eaf480d1167ad201", "text": "This paper focuses on the problem of object detection when the annotation at training time is restricted to presence or absence of object instances at image level. We present a method based on features extracted from a Convolutional Neural Network and latent SVM that can represent and exploit the presence of multiple object instances in an image. Moreover, the detection of the object instances in the image is improved by incorporating in the learning procedure additional constraints that represent domain-specific knowledge such as symmetry and mutual exclusion. We show that the proposed method outperforms the state-of-the-art in weakly-supervised object detection and object classification on the Pascal VOC 2007 dataset.", "title": "" }, { "docid": "b66609e66cc9c3844974b3246b8f737e", "text": "Inspired by the evolutionary conjecture that sexually selected traits function as indicators of pathogen resistance in animals and humans, we examined the notion that human facial attractiveness provides evidence of health. Using photos of 164 males and 169 females in late adolescence and health data on these individuals in adolescence, middle adulthood, and later adulthood, we found that adolescent facial attractiveness was unrelated to adolescent health for either males or females, and was not predictive of health at the later times. We also asked raters to guess the health of each stimulus person from his or her photo. Relatively attractive stimulus persons were mistakenly rated as healthier than their peers. The correlation between perceived health and medically assessed health increased when attractiveness was statistically controlled, which implies that attractiveness suppressed the accurate recognition of health. These findings may have important implications for evolutionary models. When social psychologists began in earnest to study physical attractiveness, they were startled by the powerful effect of facial attractiveness on choice of romantic partner (Walster, Aronson, Abrahams, & Rottmann, 1966) and other aspects of human interaction (Berscheid & Walster, 1974; Hatfield & Sprecher, 1986). More recent findings have been startling again in revealing that infants' preferences for viewing images of faces can be predicted from adults' attractiveness ratings of the faces. The assumption that perceptions of attractiveness are culturally determined has thus given ground to the suggestion that they are in substantial part biologically based (Langlois et al., 1987). A biological basis for perception of facial attractiveness is aptly viewed as an evolutionary basis. It happens that evolutionists, under the rubric of sexual selection theory, have recently devoted increasing attention to the origin and function of sexually attractive traits in animal species (Andersson, 1994; Hamilton & Zuk, 1982). Sexual selection as a province of evolutionary theory actually goes back to Darwin (1859, 1871), who noted with chagrin that a number of animals sport an appearance that seems to hinder their survival chances. Although the females of numerous birds of prey, for example, are well camouflaged in drab plumage, their mates wear bright plumage that must be conspicuous to predators.
Darwin divined that the evolutionary force that \" bred \" the males' bright plumage was the females' preference for such showiness in a mate. Whereas Darwin saw aesthetic preferences as fundamental and did not seek to give them adaptive functions, other scholars, beginning …", "title": "" }, { "docid": "22bed4d5c38a096ae24a76dce7fc5136", "text": "BACKGROUND\nMedical Image segmentation is an important image processing step. Comparing images to evaluate the quality of segmentation is an essential part of measuring progress in this research area. Some of the challenges in evaluating medical segmentation are: metric selection, the use in the literature of multiple definitions for certain metrics, inefficiency of the metric calculation implementations leading to difficulties with large volumes, and lack of support for fuzzy segmentation by existing metrics.\n\n\nRESULT\nFirst we present an overview of 20 evaluation metrics selected based on a comprehensive literature review. For fuzzy segmentation, which shows the level of membership of each voxel to multiple classes, fuzzy definitions of all metrics are provided. We present a discussion about metric properties to provide a guide for selecting evaluation metrics. Finally, we propose an efficient evaluation tool implementing the 20 selected metrics. The tool is optimized to perform efficiently in terms of speed and required memory, also if the image size is extremely large as in the case of whole body MRI or CT volume segmentation. An implementation of this tool is available as an open source project.\n\n\nCONCLUSION\nWe propose an efficient evaluation tool for 3D medical image segmentation using 20 evaluation metrics and provide guidelines for selecting a subset of these metrics that is suitable for the data and the segmentation task.", "title": "" }, { "docid": "7768c834a837d8f02ce91c4949f87d59", "text": "Gamified systems benefit from various gamification-elements to motivate users and encourage them to persist in their quests towards a goal. This paper proposes a categorization of gamification-elements and learners' motivation type to enrich a learning management system with the advantages of personalization and gamification. This categorization uses the learners' motivation type to assign gamification-elements in learning environments. To find out the probable relations between gamification-elements and learners' motivation type, a field-research is done to measure learners' motivation along with their interests in gamification-elements. Based on the results of this survey, all the gamification-elements are categorized according to related motivation types, which form our proposed categorization. To investigate the effects of this personalization approach, a gamified learning management system is prepared. Our implemented system is evaluated in Technical English course at University of Tehran. Our experimental results on the average participation rate show the effectiveness of the personalization approach on the learners' motivation. 
Based on the paper findings, we suggest an integrated categorization of gamification-elements and learners' motivation type, which can further enhance the learners' motivation through personalization.", "title": "" }, { "docid": "610476babafbf2785ace600ed409638c", "text": "In the utility grid interconnection of photovoltaic (PV) energy sources, inverters determine the overall system performance, which result in the demand to route the grid connected transformerless PV inverters (GCTIs) for residential and commercial applications, especially due to their high efficiency, light weight, and low cost benefits. In spite of these benefits of GCTIs, leakage currents due to distributed PV module parasitic capacitances are a major issue in the interconnection, as they are undesired because of safety, reliability, protective coordination, electromagnetic compatibility, and PV module lifetime issues. This paper classifies the kW and above range power rating GCTI topologies based on their leakage current attributes and investigates and/illustrates their leakage current characteristics by making use of detailed microscopic waveforms of a representative topology of each class. The cause and quantity of leakage current for each class are identified, not only providing a good understanding, but also aiding the performance comparison and inverter design. With the leakage current characteristic investigation, the study places most topologies under small number of classes with similar leakage current attributes facilitating understanding, evaluating, and the design of GCTIs. Establishing a clear relation between the topology type and leakage current characteristic, the topology families are extended with new members, providing the design engineers a variety of GCTI topology configurations with different characteristics.", "title": "" }, { "docid": "bb483dd62b4b104b0314914557a0ae4b", "text": "At a recent Reddit AMA (Ask Me Anything), Emmett Shear, CEO of Twitch.tv, the leading live video platform for streaming games, claimed that in a decade e-sports will be bigger than athletic sports [7]. While his statement was both hyperbolic and speculative, the particulars were not: e-sports tournaments have spectator numbers in the millions, recent franchise games have logged over a billion hours of gameplay, while experts and amateur e-sports enthusiasts alike regularly broadcast and share their competitive play online [1, 4, 6, 8, 9, 10]. The growing passion for mainstream e-sports is apparent, though there are also interesting, less visible happenings on the periphery of the e-sports media industry - notably, the acts of life and death that happen off the polished main stage. Smaller tournaments have been cut to make way for major e-sports franchises [11]; games with a strong culture of dark play have attempted to encourage esport iterations, encountering conflict where bribery and espionage is interwoven with traditional sporting structures [2]; and third party organizations have created new ways to watch, participate, celebrate, but also profit from one's love of games [3]. In these actions, we find some of the ways in which competitive games and gaming lifestyles are extended, but also often dissolved from the main stages of e-sports. At a broader level, these events allow us to witness the growth and sedimentation of this new socio-technical form. 
Simultaneously, we observe its erosion as the practices and form of e-sports are subject to the compromises demanded by processes of cultural and audience reception, and attempts to maximise cultural appeal and commercial success. It is in the interplay between this ceaseless growth and erosion that the significance of e-sport can be found. E-sport represents a rare opportunity to observe the historical emergence of interactive gaming in a sporting 'skin', as well as new forms of sports-like competition realised through interactive gaming platforms. The position of this panel moves beyond questions of (sports) disciplinary rivalry to consider how e-sports extend our understanding of sports and competitive games more broadly. Drawing on qualitative studies, theoretical considerations, and practical work, this panel explores the tensions, but also the new \"sporting\" possibilities which emerge in moments of transition -- between the life and death of a tournament, the extension of spectatorship opportunities, the construction of a competitive gaming scene, and the question of how to best conceptualise e-sport at the intersection of gaming and sport.", "title": "" }, { "docid": "a39c0db041f31370135462af467426ed", "text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.", "title": "" }, { "docid": "129efeb93aad31aca7be77ef499398e2", "text": "Using a Neonatal Intensive Care Unit (NICU) case study, this work investigates the current CRoss Industry Standard Process for Data Mining (CRISP-DM) approach for modeling Intelligent Data Analysis (IDA)-based systems that perform temporal data mining (TDM). The case study highlights the need for an extended CRISP-DM approach when modeling clinical systems applying Data Mining (DM) and Temporal Abstraction (TA). As the number of such integrated TA/DM systems continues to grow, this limitation becomes significant and motivated our proposal of an extended CRISP-DM methodology to support TDM, known as CRISP-TDM. This approach supports clinical investigations on multi-dimensional time series data. This research paper has three key objectives: 1) Present a summary of the extended CRISP-TDM methodology; 2) Demonstrate the applicability of the proposed model to the NICU data, focusing on the challenges associated with multi-dimensional time series data; and 3) Describe the proposed IDA architecture for applying integrated TDM.", "title": "" }, { "docid": "4da68af0db0b1e16f3597c8820b2390d", "text": "We study the task of verifiable delegation of computation on encrypted data. We improve previous definitions in order to tolerate adversaries that learn whether or not clients accept the result of a delegated computation. 
In this strong model, we construct a scheme for arbitrary computations and highly efficient schemes for delegation of various classes of functions, such as linear combinations, high-degree univariate polynomials, and multivariate quadratic polynomials. Notably, the latter class includes many useful statistics. Using our solution, a client can store a large encrypted dataset on a server, query statistics over this data, and receive encrypted results that can be efficiently verified and decrypted.\n As a key contribution for the efficiency of our schemes, we develop a novel homomorphic hashing technique that allows us to efficiently authenticate computations, at the same cost as if the data were in the clear, avoiding a $10^4$ overhead which would occur with a naive approach. We support our theoretical constructions with extensive implementation tests that show the practical feasibility of our schemes.", "title": "" } ]
scidocsrr
20406de38473c608f3b3ecef8439e0be
Two factor authentication using mobile phones
[ { "docid": "e33dd6cf660a2286c51946de4cc641a0", "text": "User authentication in computing systems traditionally depends on three factors: something you have (e.g., a hardware token), something you are (e.g., a fingerprint), and something you know (e.g., a password). In this paper, we explore a fourth factor, the social network of the user, that is, somebody you know.Human authentication through mutual acquaintance is an age-old practice. In the arena of computer security, it plays roles in privilege delegation, peer-level certification, help-desk assistance, and reputation networks. As a direct means of logical authentication, though, the reliance of human being on another has little supporting scientific literature or practice.In this paper, we explore the notion of vouching, that is, peer-level, human-intermediated authentication for access control. We explore its use in emergency authentication, when primary authenticators like passwords or hardware tokens become unavailable. We describe a practical, prototype vouching system based on SecurID, a popular hardware authentication token. We address traditional, cryptographic security requirements, but also consider questions of social engineering and user behavior.", "title": "" } ]
[ { "docid": "21384ea8d80efbf2440fb09a61b03be2", "text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.", "title": "" }, { "docid": "933af670e35c8271a483f795cadf62f9", "text": "We perform modal analysis of short-term swing dynamics in multi-machine power systems. The analysis is based on the so-called Koopman operator, a linear, infinite-dimensional operator that is defined for any nonlinear dynamical system and captures full information of the system. Modes derived through spectral analysis of the Koopman operator, called Koopman modes, provide a nonlinear extension of linear oscillatory modes. Computation of the Koopman modes extracts single-frequency, spatial modes embedded in non-stationary data of short-term, nonlinear swing dynamics, and it provides a novel technique for identification of coherent swings and machines.", "title": "" }, { "docid": "dd0bbc039e1bbc9e36ffe087e105cf56", "text": "Using a comparative analysis approach, this article examines the development, characteristics and issues concerning the discourse of modern Asian art in the twentieth century, with the aim of bringing into picture the place of Asia in the history of modernism. The wide recognition of the Western modernist canon as centre and universal displaces the contribution and significance of the non-Western world in the modern movement. From a cross-cultural perspective, this article demonstrates that modernism in the field of visual arts in Asia, while has had been complex and problematic, nevertheless emerged. Rather than treating Asian art as a generalized subject, this article argues that, with their subtly different notions of culture, identity and nationhood, the modernisms that emerged from various nations in this region are diverse and culturally specific. Through the comparison of various art-historical contexts in this region (namely China, India, Japan and Korea), this article attempts to map out some similarities as well as differences in their pursuit of an autonomous modernist representation.", "title": "" }, { "docid": "1e77561120fd88f86cdd68d64a8ebd58", "text": "Climate warming has created favorable conditions for the range expansion of many southern Ponto-Caspian freshwater fish and mollusks through the Caspian-Volga-Baltic “invasion corridor.” Some parasites can be used as “biological tags” of migration activity and generic similarity of new host populations in the Middle and Upper Volga. 
The study demonstrates a low biodiversity of parasites even of the most common estuarial invaders sampled from the northern reservoir such as the Ponto-Caspian kilka Clupeonella cultriventris (16 species), tubenose goby Proterorhinus semilunaris (19 species), and round goby Neogobius (=Appollonia) malanostomus (14 species). In 2000–2010, only a few cases of a significant increase in occurrence (up to 80–100%) and abundance indexes were recorded for some nonspecific parasites such as peritricha ciliates Epistilys lwoffi, Trichodina acuta, and Ambiphrya ameiuri on the gills of the tubenose goby; the nematode Contracoecum microcephalum and the acanthocephalan Pomphorhynchus laevis from the round goby; and metacercariae of trematodes Bucaphalus polymorphus and Apophallus muehlingi from the muscles of kilka. In some water bodies, the occurrence of the trematode Bucephalus polymorphus tended to decrease after a partial replacement of its intermediate host zebra mussel Dreissena polymorpha by D. bugensi (quagga mussel). High occurrence of parthenites of Apophallus muehlingi in the mollusk Lithoglyphus naticoides was recorded in the Upper Volga (up to 70%) as compared to the Middle Volga (34%). Fry of fish with a considerable degree of muscle injury caused by the both trematode species have lower mobility and become more available food objects for birds and carnivorous fish.", "title": "" }, { "docid": "29863e27e2caf8d2e7a2db7ffa8e1bf8", "text": "Importance\nImproving emergency care of pediatric sepsis is a public health priority, but optimal early diagnostic approaches are unclear. Measurement of lactate levels is associated with improved outcomes in adult septic shock, but pediatric guidelines do not endorse its use, in part because the association between early lactate levels and mortality is unknown in pediatric sepsis.\n\n\nObjective\nTo determine whether the initial serum lactate level is associated with 30-day mortality in children with suspected sepsis.\n\n\nDesign, Setting, and Participants\nThis observational cohort study of a clinical registry of pediatric patients with suspected sepsis in the emergency department of a tertiary children's hospital from April 1, 2012, to December 31, 2015, tested the hypothesis that a serum lactate level of greater than 36 mg/dL is associated with increased mortality compared with a serum lactate level of 36 mg/dL or less. Consecutive patients with sepsis were identified and included in the registry following consensus guidelines for clinical recognition (infection and decreased mental status or perfusion). Among 2520 registry visits, 1221 were excluded for transfer from another medical center, no measurement of lactate levels, and patients younger than 61 days or 18 years or older, leaving 1299 visits available for analysis. Lactate testing is prepopulated in the institutional sepsis order set but may be canceled at clinical discretion.\n\n\nExposures\nVenous lactate level of greater than 36 mg/dL on the first measurement within the first 8 hours after arrival.\n\n\nMain Outcomes and Measures\nThirty-day in-hospital mortality was the primary outcome. Odds ratios were calculated using logistic regression to account for potential confounders.\n\n\nResults\nOf the 1299 patients included in the analysis (753 boys [58.0%] and 546 girls [42.0%]; mean [SD] age, 7.3 [5.3] years), 899 (69.2%) had chronic medical conditions and 367 (28.3%) had acute organ dysfunction. 
Thirty-day mortality occurred in 5 of 103 patients (4.8%) with lactate levels greater than 36 mg/dL and 20 of 1196 patients (1.7%) with lactate levels of 36 mg/dL or less. Initial lactate levels of greater than 36 mg/dL were significantly associated with 30-day mortality in unadjusted (odds ratio, 3.00; 95% CI, 1.10-8.17) and adjusted (odds ratio, 3.26; 95% CI, 1.16- 9.16) analyses. The sensitivity of lactate levels greater than 36 mg/dL for 30-day mortality was 20.0% (95% CI, 8.9%-39.1%), and specificity was 92.3% (90.7%-93.7%).\n\n\nConclusions and Relevance\nIn children treated for sepsis in the emergency department, lactate levels greater than 36 mg/dL were associated with mortality but had a low sensitivity. Measurement of lactate levels may have utility in early risk stratification of pediatric sepsis.", "title": "" }, { "docid": "813977850636d545d946503fa09c47ba", "text": "In this paper we discuss the opportunities and challenges of the recently introduced Lean UX software development philosophy. The point of view is product design and development in a software agency. Lean UX philosophy is identified by three ingredients: design thinking, Lean production and Agile development. The major challenge for an agency is the organizational readiness of the client organization to adopt a new way of working. Rather than any special tool or practice, we see that the renewal of user-centered design and development is hindered by existing purchase processes and slow decision making patterns.", "title": "" }, { "docid": "04953f3a55a77b9a35e7cea663c6387e", "text": "-This paper presents a calibration procedure for a fish-eye lens (a high-distortion lens) mounted on a CCD TV camera. The method is designed to account for the differences in images acquired via a distortion-free lens camera setup and the images obtained by a fish-eye lens camera. The calibration procedure essentially defines a mapping between points in the world coordinate system and their corresponding point locations in the image plane. This step is important for applications in computer vision which involve quantitative measurements. The objective of this mapping is to estimate the internal parameters of the camera, including the effective focal length, one-pixel width on the image plane, image distortion center, and distortion coefficients. The number of parameters to be calibrated is reduced by using a calibration pattern with equally spaced dots and assuming a pin-hole model camera behavior for the image center, thus assuming negligible distortion at the image distortion center. Our method employs a non-finear transformation between points in the world coordinate system and their corresponding location on the image plane. A Lagrangian minimization method is used to determine the coefficients of the transformation. The validity and effectiveness of our calibration and distortion correction procedure are confirmed by application of this procedure on real images. Copyright © 1996 Pattern Recognition Society. Published by Elsevier Science Ltd. Camera calibration Lens distortion Intrinsic camera parameters Fish-eye lens Optimization", "title": "" }, { "docid": "869e01855c8cfb9dc3e64f7f3e73cd60", "text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. 
The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.", "title": "" }, { "docid": "9db43e2773f0c617d74abab3a661e74b", "text": "It has been indicated that proton pump inhibitor (PPI) use is associated with a loss of the anti-fracture efficacy of alendronate (AD). However, there are few prospective studies that have investigated the efficacy of AD on lumbar bone mineral density (BMD) in osteoporotic patients who are using PPIs. Thus, the aim of the present study was to investigate the efficacy of alfacalcidol (AC) and AD on lumbar BMD in osteoporotic patients using PPIs. A prospective, randomized, active control study enrolled such osteoporotic patients (age, ≥50 years). The patients were randomly assigned to receive AC (1 µg/day) or AD (35 mg/week) and were followed up for one year. Patient profiles were maintained, and lumbar BMD, bone-specific alkaline-phosphatase (BAP) and collagen type-I cross-linked N-telopeptide (NTX), upper gastrointestinal endoscopy results, and the frequency scale for the symptoms of gastroesophageal reflux disease (FSSG) were evaluated. Percentage changes in lumbar BMD, NTX, BAP, and change in FSSG score from baseline to the end of one year of treatment were investigated. Sixteen patients were eligible for analysis (eight assigned to receive AC, eight assigned to receive AD). The percentage change in lumbar BMD from baseline to the end of treatment was -0.4±4.0% for the AC group vs. 6.8±6.3% for the AD group (P=0.015). No significant percentage change of BAP and NTX between the two groups was observed. Subsequent to one year of treatment, the FSSG score did not change from the baseline values for either study group, and no new bone fractures or esophagitis were observed in either group of patients. The findings demonstrated that in osteoporotic patients using concomitant PPIs, there was a greater increase in lumbar BMD after one year of treatment with AD compared with AC. However, the number of study subjects was small; thus, further, large prospective studies are required to determine the effect of AD in osteoporotic patients using concomitant PPIs.", "title": "" }, { "docid": "12c3f5a20fd197e96cd03fa2ff03a81a", "text": "Topic Detection and Tracking (TDT) is an important research topic in data mining and information retrieval and has been explored for many years. Most of the studies have approached the problem from the event tracking point of view. We argue that the definition of stories as events is not reflecting the full picture. In this work we propose a story tracking method built on crowd-tagging in social media, where news articles are labeled with hashtags in real-time. The social tags act as rich meta-data for news articles, with the advantage that, if carefully employed, they can capture emerging concepts and address concept drift in a story. 
We present an approach for employing social tags for the purpose of story detection and tracking and show initial empirical results. We compare our method to classic keyword query retrieval and discuss an example of story tracking over time.", "title": "" }, { "docid": "1c59045d59366bf9fccea077f3f28850", "text": "Convolutional neural networks (CNN) have achieved state of the art performance on both classification and segmentation tasks. Applying CNNs to microscopy images is challenging due to the lack of datasets labeled at the single cell level. We extend the application of CNNs to microscopy image classification and segmentation using multiple instance learning (MIL). We present the adaptive Noisy-AND MIL pooling function, a new MIL operator that is robust to outliers. Combining CNNs with MIL enables training CNNs using full resolution microscopy images with global labels. We base our approach on the similarity between the aggregation function used in MIL and pooling layers used in CNNs. We show that training MIL CNNs end-to-end outperforms several previous methods on both mammalian and yeast microscopy images without requiring any segmentation steps.", "title": "" }, { "docid": "d67d13bde6c6342c66b793fc87f6cdf5", "text": "A set of visual search experiments tested the proposal that focused attention is needed to detect change. Displays were arrays of rectangles, with the target being the item that continually changed its orientation or contrast polarity. Five aspects of performance were examined: linearity of response, processing time, capacity, selectivity, and memory trace. Detection of change was found to be a self-terminating process requiring a time that increased linearly with the number of items in the display. Capacity for orientation was found to be about five items, a value comparable to estimates of attentional capacity. Observers were able to filter out both static and dynamic variations in irrelevant properties. Analysis also indicated a memory for previously attended locations. These results support the hypothesis that the process needed to detect change is much the same as the attentional process needed to detect complex static patterns. Interestingly, the features of orientation and polarity were found to be handled in somewhat different ways. Taken together, these results not only provide evidence that focused attention is needed to see change, but also show that change detection itself can provide new insights into the nature of attentional processing.", "title": "" }, { "docid": "2271dd42ca1f9682dc10c9832387b55f", "text": "People who score low on a performance test overestimate their own performance relative to others, whereas high scorers slightly underestimate their own performance. J. Kruger and D. Dunning (1999) attributed these asymmetric errors to differences in metacognitive skill. A replication study showed no evidence for mediation effects for any of several candidate variables. Asymmetric errors were expected because of statistical regression and the general better-than-average (BTA) heuristic. Consistent with this parsimonious model, errors were no longer asymmetric when either regression or the BTA effect was statistically removed. 
In fact, high rather than low performers were more error prone in that they were more likely to neglect their own estimates of the performance of others when predicting how they themselves performed relative to the group.", "title": "" }, { "docid": "7286da6597d34e834a89f343e145cbcf", "text": "Wireless sensor network (WSN) technologies are considered one of the key research areas in computer science and the healthcare application industries for improving the quality of life. The purpose of this paper is to provide a snapshot of current developments and future direction of research on wearable and implantable body area network systems for continuous monitoring of patients. This paper explains the important role of body sensor networks in medicine to minimize the need for caregivers and help the chronically ill and elderly people live an independent life, besides providing people with quality care. The paper provides several examples of state of the art technology together with the design considerations like unobtrusiveness, scalability, energy efficiency, security and also provides a comprehensive analysis of the various benefits and drawbacks of these systems. Although offering significant benefits, the field of wearable and implantable body sensor networks still faces major challenges and open research problems which are investigated and covered, along with some proposed solutions, in this paper.", "title": "" }, { "docid": "b8d1f738156d7db065d79b0e26b6d9fb", "text": "BLAKE is our proposal for SHA-3. BLAKE entirely relies on previously analyzed components: it uses the HAIFA iteration mode and builds its compression function on the ChaCha core function. BLAKE resists generic second-preimage attacks, length extension, and sidechannel attacks. Theoretical and empirical security guarantees are given, against structural and differential attacks. BLAKE hashes on a Core 2 Duo at 12 cycles/byte, and on a 8-bit PIC microcontroller at 400 cycles/byte. In hardware BLAKE can be implemented in less than 9900 gates, and reaches a throughput of 6 Gbps. FHNW, Windisch, Switzerland, jeanphilippe.aumasson@gmail.com ETHZ, Zürich, Switzerland, henzen@iis.ee.ethz.ch FHNW, Windisch, Switzerland, willi.meier@fhnw.ch Loughborough University, UK, r.phan@lboro.ac.uk", "title": "" }, { "docid": "2665314258f4b7f59a55702166f59fcc", "text": "In this paper, a wireless power transfer system with magnetically coupled resonators is studied. The idea to use metamaterials to enhance the coupling coefficient and the transfer efficiency is proposed and analyzed. With numerical calculations of a system with and without metamaterials, we show that the transfer efficiency can be improved with metamaterials.", "title": "" }, { "docid": "5bad1968438d28f7f33518a869d0a85b", "text": "Cloud data centers host diverse applications, mixing in the same network a plethora of workflows that require small predictable latency with others requiring large sustained throughput. In this environment, today’s state-of-the-art TCP protocol falls short. We present measurements of a 6000 server production cluster and reveal network impairments, such as queue buildup, buffer pressure, and incast, that lead to high application latencies. Using these insights, propose a variant of TCP, DCTCP, for data center networks. DCTCP leverages Explicit Congestion Notification (ECN) and a simple multibit feedback mechanism at the host. We evaluate DCTCP at 1 and 10Gbps speeds, through benchmark experiments and analysis. 
In the data center, operating with commodity, shallow buffered switches, DCTCP delivers the same or better throughput than TCP, while using 90% less buffer space. Unlike TCP, it also provides high burst tolerance and low latency for short flows. While TCP’s limitations cause our developers to restrict the traffic they send today, using DCTCP enables the applications to handle 10X the current background traffic, without impacting foreground traffic. Further, a 10X increase in foreground traffic does not cause any timeouts, thus largely eliminating incast problems.", "title": "" }, { "docid": "bc7fc7c69813338406d4e4b1828498fe", "text": "The task of generating natural images from 3D scenes has been a long-standing goal in computer graphics. On the other hand, recent developments in deep neural networks allow for trainable models that can produce natural-looking images with little or no knowledge about the scene structure. While the generated images often consist of realistic looking local patterns, the overall structure of the generated images is often inconsistent. In this work we propose a trainable, geometry-aware image generation method that leverages various types of scene information, including geometry and segmentation, to create realistic looking natural images that match the desired scene structure. Our geometrically-consistent image synthesis method is a deep neural network, called Geometry to Image Synthesis (GIS) framework, which retains the advantages of a trainable method, e.g., differentiability and adaptiveness, but, at the same time, makes a step towards the generalizability, control and quality output of modern graphics rendering engines. We utilize the GIS framework to insert vehicles in outdoor driving scenes, as well as to generate novel views of objects from the Linemod dataset. We qualitatively show that our network is able to generalize beyond the training set to novel scene geometries, object shapes and segmentations. Furthermore, we quantitatively show that the GIS framework can be used to synthesize large amounts of training data which proves beneficial for training instance segmentation models.", "title": "" }, { "docid": "ae3fb9d4ea2902165a364cfc6fd15b84", "text": "We present a novel deep learning architecture to address the natural language inference (NLI) task. Existing approaches mostly rely on simple reading mechanisms for independent encoding of the premise and hypothesis. Instead, we propose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to efficiently model the relationship between a premise and a hypothesis during encoding and inference. We also introduce a sophisticated ensemble strategy to combine our proposed models, which noticeably improves final predictions. Finally, we demonstrate how the results can be improved further with an additional preprocessing step. Our evaluation shows that DR-BiLSTM obtains the best single model and ensemble model results achieving the new state-of-the-art scores on the Stanford NLI dataset.", "title": "" }, { "docid": "ea27bbd203e1f7897e33e1a12fd10221", "text": "Hierarchies represent a substantial part of the multidimensional view of data based on exploring measures of facts for business or non-business domain along various dimensions. In data warehousing and on-line analytical processing they provide for examining data at different levels of detail. Several types of hierarchies have been presented with issues concerning dependencies and summarizability of data along the levels.
A design mechanism for the implementation of dimensional hierarchies in the data warehouse logical scheme has been proposed. Algorithms for enforcing dependencies on dimensional hierarchies to achieve correct summarizability of data have been developed. An implementation of the algorithms as procedures in the logical scheme's metadata has been presented.", "title": "" } ]
scidocsrr
7588c908d71909c9a2abd7cafee31d43
Towards User-Oriented RBAC Model
[ { "docid": "a22334f024e3cfa1c3fafea45b06199d", "text": "A decomposition of a binary matrix into two matrices gives a set of basis vectors and their appropriate combination to form the original matrix. Such decomposition solutions are useful in a number of application domains including text mining, role engineering as well as knowledge discovery. While a binary matrix can be decomposed in several ways, however, certain decompositions better characterize the semantics associated with the original matrix in a succinct but comprehensive way. Indeed, one can find different decompositions optimizing different criteria matching various semantics. In this paper, we first present a number of variants to the optimal Boolean matrix decomposition problem that have pragmatic implications. We then present a unified framework for modeling the optimal binary matrix decomposition and its variants using binary integer programming. Such modeling allows us to directly adopt the huge body of heuristic solutions and tools developed for binary integer programming. Although the proposed solutions are applicable to any domain of interest, for providing more meaningful discussions and results, in this paper, we present the binary matrix decomposition problem in a role engineering context, whose goal is to discover an optimal and correct set of roles from existing permissions, referred to as the role mining problem (RMP). This problem has gained significant interest in recent years as role based access control has become a popular means of enforcing security in databases. We consider several variants of the above basic RMP, including the min-noise RMP, delta-approximate RMP and edge-RMP. Solutions to each of them aid security administrators in specific scenarios. We then model these variants as Boolean matrix decomposition and present efficient heuristics to solve them.", "title": "" }, { "docid": "b18261d40726ad4b4c950f86ad19293a", "text": "The role mining problem has received considerable attention recently. Among the many solutions proposed, the Boolean matrix decomposition (BMD) formulation has stood out, which essentially discovers roles by decomposing the binary matrix representing user-to-permission assignment (UPA) into two matrices-user-to-role assignment (UA) and permission-to-role assignment (PA). However, supporting certain embedded constraints, such as separation of duty (SoD) and exceptions, is critical to the role mining process. Otherwise, the mined roles may not capture the inherent constraints of the access control policies of the organization. None of the previously proposed role mining solutions, including BMD, take into account these underlying constraints while mining. In this paper, we extend the BMD so that it reflects such embedded constraints by proposing to allow negative permissions in roles or negative role assignments for users. Specifically, by allowing negative permissions in roles, we are often able to use less roles to reconstruct the same given user-permission assignments. Moreover, from the resultant roles we can discover underlying constraints such as separation of duty constraints. This feature is not supported by any existing role mining approaches. Hence, we call the role mining problem with negative authorizations the constraint-aware role mining problem (CRM). We also explore other interesting variants of the CRM, which may occur in real situations. 
To enable CRM and its variants, we propose a novel approach, extended Boolean matrix decomposition (EBMD), which addresses the ineffectiveness of BMD in its ability to capture underlying constraints. We analyze the computational complexity of each of the CRM variants and present heuristics for problems that are proven to be NP-hard.", "title": "" } ]
[ { "docid": "be647af4c1d8821ded30995e2c6c7c8b", "text": "p53 plays an important role in regulating mitochondrial homeostasis. However, it is unknown whether p53 is required for the physiological and mitochondrial adaptations with exercise training. Furthermore, it is also unknown whether impairments in the absence of p53 are a result of its loss in skeletal muscle, or a secondary effect due to its deletion in alternative tissues. Thus, we investigated the role of p53 in regulating mitochondria both basally, and under the influence of exercise, by subjecting C57Bl/6J whole-body (WB) and muscle-specific p53 knockout (mKO) mice to a 6-week training program. Our results confirm that p53 is important for regulating mitochondrial content and function, as well as proteins within the autophagy and apoptosis pathways. Despite an increased proportion of phosphorylated p53 (Ser15) in the mitochondria, p53 is not required for training-induced adaptations in exercise capacity or mitochondrial content and function. In comparing mouse models, similar directional alterations were observed in basal and exercise-induced signaling modifications in WB and mKO mice, however the magnitude of change was less pronounced in the mKO mice. Our data suggest that p53 is required for basal mitochondrial maintenance in skeletal muscle, but is not required for the adaptive responses to exercise training.", "title": "" }, { "docid": "80a9489262ee8d94d64dd8e475c060a3", "text": "The effects of social-cognitive variables on preventive nutrition and behavioral intentions were studied in 580 adults at 2 points in time. The authors hypothesized that optimistic self-beliefs operate in 2 phases and made a distinction between action self-efficacy (preintention) and coping self-efficacy (postintention). Risk perceptions, outcome expectancies, and action self-efficacy were specified as predictors of the intention at Wave 1. Behavioral intention and coping self-efficacy served as mediators linking the 3 predictors with low-fat and high-fiber dietary intake 6 months later at Wave 2. Covariance structure analysis yielded a good model fit for the total sample and 6 subsamples created by a median split of 3 moderators: gender, age, and body weight. Parameter estimates differed between samples; the importance of perceived self-efficacy increased with age and weight.", "title": "" }, { "docid": "89e0687a467c2e026e40b6bd5633e09a", "text": "Secure two-party computation enables two parties to evaluate a function cooperatively without revealing to either party anything beyond the function’s output. The garbled-circuit technique, a generic approach to secure two-party computation for semi-honest participants, was developed by Yao in the 1980s, but has been viewed as being of limited practical significance due to its inefficiency. We demonstrate several techniques for improving the running time and memory requirements of the garbled-circuit technique, resulting in an implementation of generic secure two-party computation that is significantly faster than any previously reported while also scaling to arbitrarily large circuits. 
We validate our approach by demonstrating secure computation of circuits with over 109 gates at a rate of roughly 10 μs per garbled gate, and showing order-of-magnitude improvements over the best previous privacy-preserving protocols for computing Hamming distance, Levenshtein distance, Smith-Waterman genome alignment, and AES.", "title": "" }, { "docid": "8dbddd1ebb995ec4b2cc5ad627e91f61", "text": "Pac-Man (and variant) computer games have received some recent attention in artificial intelligence research. One reason is that the game provides a platform that is both simple enough to conduct experimental research and complex enough to require non-trivial strategies for successful game-play. This paper describes an approach to developing Pac-Man playing agents that learn game-play based on minimal onscreen information. The agents are based on evolving neural network controllers using a simple evolutionary algorithm. The results show that neuroevolution is able to produce agents that display novice playing ability, with a minimal amount of onscreen information, no knowledge of the rules of the game and a minimally informative fitness function. The limitations of the approach are also discussed, together with possible directions for extending the work towards producing better Pac-Man playing agents", "title": "" }, { "docid": "7dc54a5750832bc503e77d2893466979", "text": "Functional logic programming languages combine the most important declarative programming paradigms, and attempts to combine these paradigms have a long history. The declarative multi-paradigm language Curry is influenced by recent advances in the foundations and implementation of functional logic languages. The development of Curry is an international initiative intended to provide a common platform for the research, teaching, and application of integrated functional logic languages. This paper surveys the foundations of functional logic programming that are relevant for Curry, the main features of Curry, and extensions and applications of Curry and functional logic programming.", "title": "" }, { "docid": "8f99bf256228119ea220e2e22c19cd6f", "text": "A Wi-Fi wireless platform with embedded Linux web server and its integration into a network of sensor nodes for building automation and industrial automation is implemented here. In this system focus is on developing an ESP8266 based Low cost Wi-Fi based wireless sensor network, the IEEE 802.11n protocol is used for system. In most of the existing wireless sensor network are designed based on ZigBee and RF. The pecking order of the system is such that the lowest level is that of the sensors, the in-between level is the controllers, and the highest level is a supervisory node. The supervisor can be react as an active or passive. The system is shown to permit all achievable controller failure scenarios. The supervisor can handle the entire control load of all controllers, should the need arise. An integrated system platform which can provide Linux web server, database, and PHP run-time environment was built by using ARM Linux development board with Apache+PHP+SQLite3. Various Internet accesses were offered by using Wi-Fi wireless networks communication technology. 
Raspberry Pi use as a main server in the system and which connects the sensor nodes via Wi-Fi in the wireless sensor network and collects sensors data from different sensors, and supply multi-clients services including data display through an Embedded Linux based Web-Server.", "title": "" }, { "docid": "f462cb7fb501c561dea600ca6e815ff2", "text": "This study assessed the role of rape myth acceptance (RMA) and situational factors in the perception of three different rape scenarios (date rape, marital rape, and stranger rape). One hundred and eighty-two psychology undergraduates were asked to emit four judgements about each rape situation: victim responsibility, perpetrator responsibility, intensity of trauma, and likelihood to report the crime to the police. It was hypothesized that neither RMA nor situational factors alone can explain how rape is perceived; it is the interaction between these two factors that best account for social reactions to sexual aggression. The results generally supported the authors' hypothesis: Victim blame, estimation of trauma, and the likelihood of reporting the crime to the police were best explained by the interaction between observer characteristics, such as RMA, and situational clues. That is, the less stereotypic the rape situation was, the greater was the influence of attitudes toward rape on attributions.", "title": "" }, { "docid": "0e2b24f697aa7920713b173eea0f8ab0", "text": "Multilabel classification is a central problem in many areas of data analysis, including text and multimedia categorization, where individual data objects need to be assigned multiple labels. A key challenge in these tasks is to learn a classifier that can properly exploit label correlations without requiring exponential enumeration of label subsets during training or testing. We investigate novel loss functions for multilabel training within a large margin framework—identifying a simple alternative that yields improved generalization while still allowing efficient training. We furthermore show how covariances between the label models can be learned simultaneously with the classification model itself, in a jointly convex formulation, without compromising scalability. The resulting combination yields state of the art accuracy in multilabel webpage classification.", "title": "" }, { "docid": "45bcb05a04dd86831c89b630973172a6", "text": "In the challenging area of Digital image processing, the image is degraded by noise. A large number of image de-noising techniques are proposed to remove noise from the images. These techniques depend on the type of noise present in the images. This paper focuses on the techniques proposed to remove the salt and pepper noise. Some of the de-noising techniques are Mean Filter, Median Filter, Mean-Median filter, Weighted Median Filter (WMF), Standard Median Filter (SMF), SUper Mean Filter (SUMF), Decision Based Median Filter (DBMF). Among the existing de-noising techniques Weighted median Filter provides satisfactory results in de-noising the image by preserving the edges for the high level noise.", "title": "" }, { "docid": "7e40c7145f4613f12e7fc13646f3927c", "text": "One strategy for intelligent agents in order to reach their goals is to plan their actions in advance. This can be done by simulating how the agent’s actions affect the environment and how it evolves independently of the agent. For this simulation, a model of the environment is needed. 
However, the creation of this model might be labor-intensive and it might be computational complex to evaluate during simulation. That is why, we suggest to equip an intelligent agent with a learned intuition about the dynamics of its environment by utilizing the concept of intuitive physics. To demonstrate our approach, we used an agent that can freely move in a two dimensional floor plan. It has to collect moving targets while avoiding the collision with static and dynamic obstacles. In order to do so, the agent plans its actions up to a defined planning horizon. The performance of our agent, which intuitively estimates the dynamics of its surrounding objects based on artificial neural networks, is compared to an agent which has a physically exact model of the world and one that acts randomly. The evaluation shows comparatively good results for the intuition based agent considering it uses only a quarter of the computation time in comparison to the agent with a physically exact model.", "title": "" }, { "docid": "edbbf1491e552346d42d39ebf90fc9fc", "text": "The use of ICT in the classroom is very important for providing opportunities for students to learn to operate in an information age. Studying the obstacles to the use of ICT in education may assist educators to overcome these barriers and become successful technology adopters in the future. This paper provides a meta-analysis of the relevant literature that aims to present the perceived barriers to technology integration in science education. The findings indicate that teachers had a strong desire for to integrate ICT into education; but that, they encountered many barriers. The major barriers were lack of confidence, lack of competence, and lack of access to resources. Since confidence, competence and accessibility have been found to be the critical components of technology integration in schools, ICT resources including software and hardware, effective professional development, sufficient time, and technical support need to be provided to teachers. No one component in itself is sufficient to provide good teaching. However, the presence of all components increases the possibility of excellent integration of ICT in learning and teaching opportunities. Generally, this paper provides information and recommendation to those responsible for the integration of new technologies into science education.", "title": "" }, { "docid": "acb41ecca590ed8bc53b7af46a280daf", "text": "We consider the problem of state estimation for a dynamic system driven by unobserved, correlated inputs. We model these inputs via an uncertain set of temporally correlated dynamic models, where this uncertainty includes the number of modes, their associated statistics, and the rate of mode transitions. The dynamic system is formulated via two interacting graphs: a hidden Markov model (HMM) and a linear-Gaussian state space model. The HMM's state space indexes system modes, while its outputs are the unobserved inputs to the linear dynamical system. This Markovian structure accounts for temporal persistence of input regimes, but avoids rigid assumptions about their detailed dynamics. Via a hierarchical Dirichlet process (HDP) prior, the complexity of our infinite state space robustly adapts to new observations. 
We present a learning algorithm and computational results that demonstrate the utility of the HDP for tracking, and show that it efficiently learns typical dynamics from noisy data.", "title": "" }, { "docid": "1350f4e274947881f4562ab6596da6fd", "text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.", "title": "" }, { "docid": "eadc810575416fccea879c571ddfbfd2", "text": "In this paper, we propose a zoom-out-and-in network for generating object proposals. A key observation is that it is difficult to classify anchors of different sizes with the same set of features. Anchors of different sizes should be placed accordingly based on different depth within a network: smaller boxes on high-resolution layers with a smaller stride while larger boxes on low-resolution counterparts with a larger stride. Inspired by the conv/deconv structure, we fully leverage the low-level local details and high-level regional semantics from two feature map streams, which are complimentary to each other, to identify the objectness in an image. A map attention decision (MAD) unit is further proposed to aggressively search for neuron activations among two streams and attend the most contributive ones on the feature learning of the final loss. The unit serves as a decision-maker to adaptively activate maps along certain channels with the solely purpose of optimizing the overall training loss. One advantage of MAD is that the learned weights enforced on each feature channel is predicted on-the-fly based on the input context, which is more suitable than the fixed enforcement of a convolutional kernel. Experimental results on three datasets demonstrate the effectiveness of our proposed algorithm over other state-of-the-arts, in terms of average recall for region proposal and average precision for object detection.", "title": "" }, { "docid": "1ff9285909cb1363e047777f3531aed8", "text": "Central inverters based on conventional topologies are the current preferred solution in solar farms because of their low cost and simplicity. However, such topologies have some disadvantages as poor maximum power tracking, use of bulky filters and low frequency transformers. A good alternative in this case is the SiC-based Cascaded Multilevel Converter (CMC), which provides a distributed MPPT control with reduced footprint and high flexibility. Each cell of a CMC usually has as an intermediate stage a solid-state transformer based on a Dual-Active-Bridge (DAB) DC-DC Converter. 
Due to the unidirectional power flow characteristic of the photovoltaic application and aiming further reduction in the converter footprint, this work proposes a Forward Dual-Active-Bridge (F-DAB) topology, which reduces the number of active switches. This paper shows through analytical, simulation and experimental results that the cell using an F-DAB is superior to other unidirectional topologies in two aspects: greater power density with the available power modules and simplicity of control.", "title": "" }, { "docid": "3b8e5fac9b2a2be74ad59f89c7152b44", "text": "Many previous papers have lamented the fact that the findings of past GSS research have been inconsistent. This paper develops a new model for interpreting GSS effects on performance (a Fit-Appropriation Model), which argues that GSS performance is affected by two factors. The first is the fit between the task and the GSS structures selected for use (i.e., communication support and information processing support). The second is the appropriation support the group receives in the form of training, facilitation, and software restrictiveness to help them effectively incorporate the selected GSS structures into their meeting process. A meta-analysis using this model to organize and classify past research found that when used appropriately (i.e., there is a fit between the GSS structures and the task and the group receives appropriation support), GSS use increased the number of ideas generated, took less time, and led to more satisfied participants than if the group worked without the GSS. Fitting the GSS to the task had the most impact on outcome effectiveness (decision quality and ideas), while appropriation support had the most impact on the process (time required and process satisfaction). We conclude that when using this theoretical lens, the results of GSS research do not appear inconsistent.", "title": "" }, { "docid": "c649afa8161bf3b74b399537bb6dd0d3", "text": "Bad weather conditions such as fog, haze, and dust often reduce the performance of outdoor cameras. To improve the effectiveness of surveillance and in-vehicle cameras under such conditions, we propose a method based on a dark channel prior for quickly defogging images. It first estimates the intensity of the atmospheric light by searching the sky area in the foggy image. Then it estimates the transmission map by refining a coarse map from a fine map. Finally, it produces a clearer image from the foggy image by using the estimated intensity and the transmission map. When implemented on a notebook PC with a graphics processing unit (GPU), it was able to process 50 images (720 × 480 pixels) per second. It can thus be used for real-time processing of surveillance and in-vehicle system images.", "title": "" }, { "docid": "026b95eaf171fae89fed3d4069a04482", "text": "The field of antibiotic drug discovery and the monitoring of new antibiotic resistance elements have yet to fully exploit the power of the genome revolution. Despite the fact that the first genomes sequenced of free living organisms were those of bacteria, there have been few specialized bioinformatic tools developed to mine the growing amount of genomic data associated with pathogens. In particular, there are few tools to study the genetics and genomics of antibiotic resistance and how it impacts bacterial populations, ecology, and the clinic. We have initiated development of such tools in the form of the Comprehensive Antibiotic Research Database (CARD; http://arpcard.mcmaster.ca). 
The CARD integrates disparate molecular and sequence data, provides a unique organizing principle in the form of the Antibiotic Resistance Ontology (ARO), and can quickly identify putative antibiotic resistance genes in new unannotated genome sequences. This unique platform provides an informatic tool that bridges antibiotic resistance concerns in health care, agriculture, and the environment.", "title": "" }, { "docid": "35ccabf8ee222ac6e5bf0312d4331819", "text": "Brownian Dynamics (BD), also known as Langevin Dynamics, and Dissipative Particle Dynamics (DPD) are implicit solvent methods commonly used in models of soft matter and biomolecular systems. The interaction of the numerous solvent particles with larger particles is coarse-grained as a Langevin thermostat is applied to individual particles or to particle pairs. The Langevin thermostat requires a pseudo-random number generator (PRNG) to generate the stochastic force applied to each particle or pair of neighboring particles during each time step in the integration of Newton’s equations of motion. In a Single-Instruction-Multiple-Thread (SIMT) GPU parallel computing environment, small batches of random numbers must be generated over thousands of threads and millions of kernel calls. In this communication we introduce a one-PRNG-per-kernel-call-per-thread scheme, in which a micro-stream of pseudorandom numbers is generated in each thread and kernel call. These high quality, statistically robust micro-streams require no global memory for state storage, are more computationally efficient than other PRNG schemes in memory-bound kernels, and uniquely enable the DPD simulation method without requiring communication between threads.", "title": "" }, { "docid": "2ee0eb9ab9d6c5b9bdad02b9f95c8691", "text": "Aim: To describe lower extremity injuries for badminton in New Zealand. Methods: Lower limb badminton injuries that resulted in claims accepted by the national insurance company Accident Compensation Corporation (ACC) in New Zealand between 2006 and 2011 were reviewed. Results: The estimated national injury incidence for badminton injuries in New Zealand from 2006 to 2011 was 0.66%. There were 1909 lower limb badminton injury claims which cost NZ$2,014,337 (NZ$ value over 2006 to 2011). The age-bands frequently injured were 10–19 (22%), 40–49 (22%), 30–39 (14%) and 50–59 (13%) years. Sixty-five percent of lower limb injuries were knee ligament sprains/tears. Males sustained more cruciate ligament sprains than females (75 vs. 39). Movements involving turning, changing direction, shifting weight, pivoting or twisting were responsible for 34% of lower extremity injuries. Conclusion: The knee was most frequently", "title": "" } ]
scidocsrr
e5f2c3e5b220782a3b7dc1130eb5d4f2
Modeling the World from Internet Photo Collections
[ { "docid": "efe8cf69a4666151603393032af22d8a", "text": "In this paper we present and discuss the findings of a study that investigated how people manage their collections of digital photographs. The six-month, 13-participant study included interviews, questionnaires, and analysis of usage statistics gathered from an instrumented digital photograph management tool called Shoebox. Alongside simple browsing features such as folders, thumbnails and timelines, Shoebox has some advanced multimedia features: content-based image retrieval and speech recognition applied to voice annotations. Our results suggest that participants found their digital photos much easier to manage than their non-digital ones, but that this advantage was almost entirely due to the simple browsing features. The advanced features were not used very often and their perceived utility was low. These results should help to inform the design of improved tools for managing personal digital photographs.", "title": "" } ]
[ { "docid": "3776b7fdcd1460b60a18c87cd60b639e", "text": "A sketch is a probabilistic data structure that is used to record frequencies of items in a multi-set. Various types of sketches have been proposed in literature and applied in a variety of fields, such as data stream processing, natural language processing, distributed data sets etc. While several variants of sketches have been proposed in the past, existing sketches still have a significant room for improvement in terms of accuracy. In this paper, we propose a new sketch, called Slim-Fat (SF) sketch, which has a significantly higher accuracy compared to prior art, a much smaller memory footprint, and at the same time achieves the same speed as the best prior sketch. The key idea behind our proposed SF-sketch is to maintain two separate sketches: a small sketch called Slim-subsketch and a large sketch called Fat-subsketch. The Slim-subsketch, stored in the fast memory (SRAM), enables fast and accurate querying. The Fat-subsketch, stored in the relatively slow memory (DRAM), is used to assist the insertion and deletion from Slim-subsketch. We implemented and extensively evaluated SF-sketch along with several prior sketches and compared them side by side. Our experimental results show that SF-sketch outperforms the most commonly used CM-sketch by up to 33.1 times in terms of accuracy.", "title": "" }, { "docid": "e6b92ef03e801af68cb2660e6ff74902", "text": "In the past two decades, there has been much interest in applying neural networks to financial time series forecasting. Yet, there has been relatively little attention paid to selecting the input features for training these networks. This paper presents a novel CARTMAP neural network based on Adaptive Resonance Theory that incorporates automatic, intuitive, transparent, and parsimonious feature selection with fast learning. On average, over three separate 4-year simulations spanning 2004–2009 of Dow Jones Industrial Average stocks, CARTMAP outperformed related and classical alternatives. The alternatives were an industry standard random walk, a regression model, a general purpose ARTMAP, and ARTMAP with stepwise feature selection. This paper also discusses why the novel feature selection scheme outperforms the alternatives and how it can represent a step toward more transparency in financial modeling.", "title": "" }, { "docid": "0d0b9d20032feb4178a3c98f2787cb8d", "text": "To address the problem of detecting malicious codes in malware and extracting the corresponding evidences in mobile devices, we construct a consortium blockchain framework, which is composed of a detecting consortium chain shared by test members and a public chain shared by users. Specifically, in view of different malware families in Android-based system, we perform feature modeling by utilizing statistical analysis method, so as to extract malware family features, including software package feature, permission and application feature, and function call feature. Moreover, for reducing false-positive rate and improving the detecting ability of malware variants, we design a multi-feature detection method of Android-based system for detecting and classifying malware. In addition, we establish a fact-base of distributed Android malicious codes by blockchain technology. 
The experimental results show that, compared with the previously published algorithms, the new proposed method can achieve higher detection accuracy in limited time with lower false-positive and false-negative rates.", "title": "" }, { "docid": "1610802593a60609bc1213762a9e0584", "text": "We examined emotional stability, ambition (an aspect of extraversion), and openness as predictors of adaptive performance at work, based on the evolutionary relevance of these traits to human adaptation to novel environments. A meta-analysis on 71 independent samples (N = 7,535) demonstrated that emotional stability and ambition are both related to overall adaptive performance. Openness, however, does not contribute to the prediction of adaptive performance. Analysis of predictor importance suggests that ambition is the most important predictor for proactive forms of adaptive performance, whereas emotional stability is the most important predictor for reactive forms of adaptive performance. Job level (managers vs. employees) moderates the effects of personality traits: Ambition and emotional stability exert stronger effects on adaptive performance for managers as compared to employees.", "title": "" }, { "docid": "f7eabf95f8d099a102c76c071c82b4ef", "text": "b. general, timeless truths, such as physical laws or customs The earth revolves around the sun. c. states It is cloudy. d. subordinate clauses of time or condition when the main clause contains a future-time verb When she comes, we’ll find out. e. events or actions in the present, such as in sporting events The goal counts! f. speech acts in the present I nominate Chris. g. conversational historical present (in narration) “So he enters the room and crosses over to the other side without looking at anyone.” h. events scheduled in the future My flight departs at 9 a.m. tomorrow.", "title": "" }, { "docid": "a0640bbfa22020e216d4ab5dfefa9bc0", "text": "Clozapine has demonstrated superior efficacy in relieving positive and negative symptoms in treatment-resistant schizophrenic patients; unlike other antipsychotics, it causes minimal extrapyramidal side effects (EPS) and has little effect on serum prolactin. Despite these benefits, the use of clozapine has been limited because of infrequent but serious side effects, the most notable being agranulocytosis. In recent years, however, mandatory blood monitoring has significantly reduced both the incidence of agranulocytosis and its associated mortality. The occurrence of seizures appears to be dose-related and can generally be managed by reduction in clozapine dosage. Less serious and more common side effects of clozapine including sedation, hypersalivation, tachycardia, hypotension, hypertension, weight gain, constipation, urinary incontinence, and fever can often be managed medically and are generally tolerated by the patient. Appropriate management of clozapine side effects facilitates a maximization of the benefits of clozapine treatment, and physicians and patients alike should be aware that there is a range of benefits to clozapine use that is wider than its risks.", "title": "" }, { "docid": "61d29b80bcea073665f454444a3b0262", "text": "Nitric oxide (NO) is the principal mediator of penile erection. NO is synthesized by nitric oxide synthase (NOS). It has been well documented that the major causative factor contributing to erectile dysfunction in diabetic patients is the reduction in the amount of NO synthesis in the corpora cavernosa of the penis resulting in alterations of normal penile homeostasis. 
Arginase is an enzyme that shares a common substrate with NOS, thus arginase may downregulate NO production by competing with NOS for this substrate, l-arginine. The purpose of the present study was to compare arginase gene expression, protein levels, and enzyme activity in diabetic human cavernosal tissue. When compared to normal human cavernosal tissue, diabetic corpus cavernosum from humans with erectile dysfunction had higher levels of arginase II protein, gene expression, and enzyme activity. In contrast, gene expression and protein levels of arginase I were not significantly different in diabetic cavernosal tissue when compared to control tissue. The reduced ability of diabetic tissue to convert l-arginine to l-citrulline via nitric oxide synthase was reversed by the selective inhibition of arginase by 2(S)-amino-6-boronohexanoic acid (ABH). These data suggest that the increased expression of arginase II in diabetic cavernosal tissue may contribute to the erectile dysfunction associated with this common disease process and may play a role in other manifestations of diabetic disease in which nitric oxide production is decreased.", "title": "" }, { "docid": "94c7fde13a5792a89b7575ac41827f1c", "text": "The noise sensitivities of nine different QRS detection algorithms were measured for a normal, single-channel, lead-II, synthesized ECG corrupted with five different types of synthesized noise: electromyographic interference, 60-Hz power line interference, baseline drift due to respiration, abrupt baseline shift, and a composite noise constructed from all of the other noise types. The percentage of QRS complexes detected, the number of false positives, and the detection delay were measured. None of the algorithms were able to detect all QRS complexes without any false positives for all of the noise types at the highest noise level. Algorithms based on amplitude and slope had the highest performance for EMG-corrupted ECG. An algorithm using a digital filter had the best performance for the composite-noise-corrupted data.<<ETX>>", "title": "" }, { "docid": "4d04debb13948f73e959929dbf82e139", "text": "DynaMIT is a simulation-based real-time system designed to estimate the current state of a transportation network, predict future tra c conditions, and provide consistent and unbiased information to travelers. To perform these tasks, e cient simulators have been designed to explicitly capture the interactions between transportation demand and supply. The demand re ects both the OD ow patterns and the combination of all the individual decisions of travelers while the supply re ects the transportation network in terms of infrastructure, tra c ow and tra c control. This paper describes the design and speci cation of these simulators, and discusses their interactions. Massachusetts Institute of Technology, Dpt of Civil and Environmental Engineering, Cambridge, Ma. Email: mba@mit.edu Ecole Polytechnique F ed erale de Lausanne, Dpt. of Mathematics, CH-1015 Lausanne, Switzerland. Email: michel.bierlaire@ep .ch Volpe National Transportation Systems Center, Dpt of Transportation, Cambridge, Ma. Email: koutsopoulos@volpe.dot.gov The Ohio State University, Columbus, Oh. Email: mishalani.1@osu.edu", "title": "" }, { "docid": "9530fccd1de438b9beb59c954da29a69", "text": "INTRODUCTION\nDefecation pain is a common problem with many etiologies implicated. 
Elucidating a cause requires a thorough medical history, examination and appropriate investigations, which may include endoscopy, barium enema, examination under anesthesia and magnetic resonance imaging or computed tomography. Coccydynia is a term used to describe pain in the region of the coccyx, often due to abnormal mobility of the coccyx. Non-surgical management options remain the gold-standard for coccydynia with surgery being reserved for complicated cases.\n\n\nCASE PRESENTATION\nThis is a case of a 67-year-old Caucasian man who presented with a two-and-a-half-year history of worsening rectal pain.\n\n\nCONCLUSION\nTo the best of our knowledge, we describe the first case in the literature of an abnormally mobile anteverted coccyx causing predominantly defecation pain and coccydynia, successfully treated by coccygectomy. When first-line investigations fail to elucidate a cause of defecation pain one must, in the presence of unusual symptoms, consider musculoskeletal pathologies emanating from the coccyx and an orthopedic consultation must then be sought for diagnostic purposes.", "title": "" }, { "docid": "b248655d158da77d257a243ee331aa34", "text": "Paraphrase identification is a fundamental task in natural language process areas. During the process of fulfilling this challenge, different features are exploited. Semantically equivalence and syntactic similarity are of the most importance. Apart from advance feature extraction, deep learning based models are also proven their promising in natural language process jobs. As a result in this research, we adopted an interactive representation to modelling the relationship between two sentences not only on word level, but also on phrase and sentence level by employing convolution neural network to conduct paraphrase identification by using semantic and syntactic features at the same time. The experimental study on commonly used MSRP has shown the proposed method's promising potential.", "title": "" }, { "docid": "c3df0da617368c2472c76a6c95366338", "text": "The infinitary propositional logic of here-and-there is important for the theory of answer set programming in view of its relation to strongly equivalent transformations of logic programs. We know a formal system axiomatizing this logic exists, but a proof in that system may include infinitely many formulas. In this note we describe a relationship between the validity of infinitary formulas in the logic of here-and-there and the provability of formulas in some finite deductive systems. This relationship allows us to use finite proofs to justify the validity of infinitary formulas.", "title": "" }, { "docid": "a7fe7068ce05260603ca697a8e5e8410", "text": "In this paper, we will introduce our newly developed 3D simulation system for miniature unmanned aerial vehicles (UAVs) navigation and control in GPS-denied environments. As we know, simulation technologies can verify the algorithms and identify potential problems before the actual flight test and to make the physical implementation smoothly and successfully. To enhance the capability of state-of-the-art of research-oriented UAV simulation system, we develop a 3D simulator based on robot operation system (ROS) and a game engine, Unity3D. Unity3D has powerful graphics and can support high-fidelity 3D environments and sensor modeling which is important when we simulate sensing technologies in cluttered and harsh environments. 
On the other hand, ROS can provide clear software structure and simultaneous operation between hardware devices for actual UAVs. By developing data transmitting interface and necessary sensor modeling techniques, we have successfully glued ROS and Unity together. The integrated simulator can handle real-time multi-UAV navigation and control algorithms, including online processing of a large number of sensor data.", "title": "" }, { "docid": "a5cee6dc248da019159ba7d769406928", "text": "Coffee is one of the most consumed beverages in the world and is the second largest traded commodity after petroleum. Due to the great demand of this product, large amounts of residues are generated in the coffee industry, which are toxic and represent serious environmental problems. Coffee silverskin and spent coffee grounds are the main coffee industry residues, obtained during the beans roasting, and the process to prepare “instant coffee”, respectively. Recently, some attempts have been made to use these residues for energy or value-added compounds production, as strategies to reduce their toxicity levels, while adding value to them. The present article provides an overview regarding coffee and its main industrial residues. In a first part, the composition of beans and their processing, as well as data about the coffee world production and exportation, are presented. In the sequence, the characteristics, chemical composition, and application of the main coffee industry residues are reviewed. Based on these data, it was concluded that coffee may be considered as one of the most valuable primary products in world trade, crucial to the economies and politics of many developing countries since its cultivation, processing, trading, transportation, and marketing provide employment for millions of people. As a consequence of this big market, the reuse of the main coffee industry residues is of large importance from environmental and economical viewpoints.", "title": "" }, { "docid": "1580a496e78f9dc5599201db32e4ab94", "text": "Path planning is one of the key technologies in the robot research. The aim of it is to find the shortest safe path in the objective environments. Firstly, the robot is transformed into particle by expanding obstacles method; the obstacle is transformed into particle by multi-round enveloping method. Secondly, we make the Voronoi graph of the particles of obstacle and find the skeleton topology about the feasible path. Following, a new arithmetic named heuristic bidirectional ant colony algorithm is proposed by joining the merit of ant colony algorithm, Dijkstra algorithm and heuristic algorithm, with which we can find the shortest path of the skeleton topology. After transforming the path planning into n-dimensions quadrate feasible region by coordinate transformation and solving it with particle swarm optimization, the optimization of the path planning is acquired.", "title": "" }, { "docid": "f92d8e163f3f4665bafaa2d662a3fb57", "text": "Mobile cloud computing utilizing cloudlet is an emerging technology to improve the quality of mobile services. In this paper, to better overcome the main bottlenecks of the computation capability of cloudlet and the wireless bandwidth between mobile devices and cloudlet, we consider the multi-resource allocation problem for the cloudlet environment with resource-intensive and latency-sensitive mobile applications. 
The proposed multi-resource allocation strategy enhances the quality of mobile cloud service, in terms of the system throughput (the number of admitted mobile applications) and the service latency. We formulate the resource allocation model as a semi-Markov decision process under the average cost criterion, and solve the optimization problem using linear programming technology. Through maximizing the long-term reward while meeting the system requirements of the request blocking probability and service time latency, an optimal resource allocation policy is calculated. From simulation result, it is indicated that the system adaptively adjusts the allocation policy about how much resource to allocate and whether to utilize the distant cloud according to the traffic of mobile service requests and the availability of the resource in the system. Our algorithm outperforms greedy admission control over a broad range of environments.", "title": "" }, { "docid": "20926ad65458e5dc7c187ba40808f547", "text": "The aim of this paper is to compile a model of IT project success from management's perspective. Therefore, a qualitative research approach is proposed by interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey provides fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria the management considers especially important and which ones are being missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident here that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategical criteria, such as value of the project, customer perspective or impact on the organization, have meanwhile caught up or pulled even. TYPE OF PAPER AND", "title": "" }, { "docid": "3a50010ad60475ceed77a3edd435a148", "text": "Social networking sites (SNS) affordances for persistent interaction, collective generation of knowledge, and formation of peer-based clusters for knowledge sharing render them useful for developing constructivist knowledge environments. However, notwithstanding their academic value, these environments are not necessarily insulated from the exercise of academic/ power. Despite a growing corpus of literature on SNS’s capacity to enhance social capital formation, foster trust, and connect interactants in remote locations, there is a dearth of research on how SNS potentially leverages academic /power relations in university settings. Mindful of the unsubstantiated nexus between power relations, knowledge construction, and academic appropriation of SNS, unraveling the impact of SNS on lecturer-student and student-peer power relations in the university can illuminate the understanding of this academic connection/puzzle. This work employs Critical Theory of Technology (CTT) and virtual case study method to explore the influence of SNS use on power relations of lecturers, students, and their peers in a blended (Facebookenhanced) Information Technology course at a middle-sized South African university. The findings demonstrate that academic appropriation of SNS differentially empower academics and students at different times, and students employ various forms of sophisticated authorial language to territorialise power in their interactions with lecturers and peers. 
Academics and instructional designers are urged to examine different forms of language employed in lecturer-student and student-peer discourses to grasp student learning needs and to foster meaningful, knowledge-rich learning environments.", "title": "" }, { "docid": "c8f39a710ca3362a4d892879f371b318", "text": "While sentiment and emotion analysis has received a considerable amount of research attention, the notion of understanding and detecting the intensity of emotions is relatively less explored. This paper describes a system developed for predicting emotion intensity in tweets. Given a Twitter message, CrystalFeel uses features derived from parts-of-speech, ngrams, word embedding, and multiple affective lexicons including Opinion Lexicon, SentiStrength, AFFIN, NRC Emotion & Hash Emotion, and our in-house developed EI Lexicons to predict the degree of the intensity associated with fear, anger, sadness, and joy in the tweet. We found that including the affective lexicons-based features allowed the system to obtain strong prediction performance, while revealing interesting emotion word-level and message-level associations. On gold test data, CrystalFeel obtained Pearson correlations of .717 on average emotion intensity and of .816 on sentiment intensity.", "title": "" }, { "docid": "ab2b1a7d3b279e91ef9dc28c79af02ee", "text": "Online Social Networks (OSNs) have not only significantly reformed the social interaction pattern but have also emerged as an effective platform for recommendation of services and products. The upswing in use of the OSNs has also witnessed growth in unwanted activities on social media. On the one hand, the spammers on social media can be a high risk towards the security of legitimate users and on the other hand some of the legitimate users, such as bloggers can pollute the results of recommendation systems that work alongside the OSNs. The polluted results of recommendation systems can be precarious to the masses that track recommendations. Therefore, it is necessary to segregate such type of users from the genuine experts. We propose a framework that separates the spammers and unsolicited bloggers from the genuine experts of a specific domain. The proposed approach employs modified Hyperlink Induced Topic Search (HITS) to separate the unsolicited bloggers from the experts on Twitter on the basis of tweets. The approach considers domain specific keywords in the tweets and several tweet characteristics to identify the unsolicited bloggers. Experimental results demonstrate the effectiveness of the proposed methodology as compared to several state-of-the-art approaches and classifiers.", "title": "" } ]
scidocsrr
94c85cc50460ff94c9a2c34312645307
A New Localization System for Indoor Service Robots in Low Luminance and Slippery Indoor Environment Using Afocal Optical Flow Sensor Based Sensor Fusion
[ { "docid": "69b831bb25e5ad0f18054d533c313b53", "text": "In recent years, indoor positioning has emerged as a critical function in many end-user applications; including military, civilian, disaster relief and peacekeeping missions. In comparison with outdoor environments, sensing location information in indoor environments requires a higher precision and is a more challenging task in part because various objects reflect and disperse signals. Ultra WideBand (UWB) is an emerging technology in the field of indoor positioning that has shown better performance compared to others. In order to set the stage for this work, we provide a survey of the state-of-the-art technologies in indoor positioning, followed by a detailed comparative analysis of UWB positioning technologies. We also provide an analysis of strengths, weaknesses, opportunities, and threats (SWOT) to analyze the present state of UWB positioning technologies. While SWOT is not a quantitative approach, it helps in assessing the real status and in revealing the potential of UWB positioning to effectively address the indoor positioning problem. Unlike previous studies, this paper presents new taxonomies, reviews some major recent advances, and argues for further exploration by the research community of this challenging problem space.", "title": "" }, { "docid": "5e61c7ae0e4301ab5736d0862ade4152", "text": "This paper investigates the use of refocused optical mouse sensors for odometry in the field of outdoor robotic navigation. Optical mouse sensors like the ADNS-2610 are small, inexpensive, non-contact devices, which integrate a complementary metal-oxide semiconductor camera and DSP hardware to provide 2-D optical displacement measurements. Current research indicates that vertical height variance contributes as a dominant cause of systematic error to horizontal displacement measurements, which raises significant problems for irregular environments encountered in outdoor robotic navigation. In this paper, we propose two approaches to mitigate this systematic error induced by height variance. The efficacy and robustness of the proposed approaches are tested by experimentation on an asphalt concrete road surface and by simulation.", "title": "" } ]
[ { "docid": "60b3b99b717c844702f1b30b57942dfa", "text": "The recent advances in distributed energy systems require new models for exchanging energy among prosumers in microgrids. The blockchain technology promises to solve the digital issues related to distributed systems without a trusted authority and to allow quick and secure energy transactions, which are verified and cryptographically protected. Transactions are approved and subsequently recorded on all the machines participating in the blockchain. This work demonstrates how users, which are nodes of the energy and digital networks, exchange energy supported by a customized blockchain based on Tendermint. We focus on the procedures for generating blocks and defining data structures for storing energy transactions.", "title": "" }, { "docid": "64fb3fdb4f37ee75b1506c2fdb09cf7a", "text": "With the proliferation of mobile devices, cloud-based photo sharing and searching services are becoming common du e to the mobile devices’ resource constrains. Meanwhile, the r is also increasing concern about privacy in photos. In this wor k, we present a framework SouTu, which enables cloud servers to provide privacy-preserving photo sharing and search as a se rvice to mobile device users. Privacy-seeking users can share the ir photos via our framework to allow only their authorized frie nds to browse and search their photos using resource-bounded mo bile devices. This is achieved by our carefully designed archite cture and novel outsourced privacy-preserving computation prot ocols, through which no information about the outsourced photos or even the search contents (including the results) would be revealed to the cloud servers. Our framework is compatible with most of the existing image search technologies, and it requi res few changes to the existing cloud systems. The evaluation of our prototype system with 31,772 real-life images shows the communication and computation efficiency of our system.", "title": "" }, { "docid": "bd29789dbb5c9135accf655f5122f492", "text": "Nowadays all-digital resolver-to-digital conversion is popularly invested for space-limited applications. However time delay is inevitable in demodulation part when frequency shifting algorithm used. Furthermore, the angle calculation result is affected by time delay. So the synchronous demodulation is analyzed in this paper to void time delay. The demodulated sine and cosine signals always exist with amplitude and quadrature errors. In order to eliminate the influence of these two errors, the double synchronous reference frame-based phase-locked loop (DSRF-PLL) is investigated in the angle calculation part. DSRF-PLL removes the angular position error caused by the amplitude error and makes the position error caused by the quadrature error to be a constant value, which can be easily compensated in the software. So the angular position can be detected with no time delay and error. The presented all-digital resolver-to-digital conversion scheme is verified by the simulation and experimental results at the end of this paper.", "title": "" }, { "docid": "6ffbb212bec4c90c6b37a9fde3fd0b4c", "text": "In this paper, we address a new research problem on active learning from data streams where data volumes grow continuously and labeling all data is considered expensive and impractical. The objective is to label a small portion of stream data from which a model is derived to predict newly arrived instances as accurate as possible. 
In order to tackle the challenges raised by data streams' dynamic nature, we propose a classifier ensembling based active learning framework which selectively labels instances from data streams to build an accurate classifier. A minimal variance principle is introduced to guide instance labeling from data streams. In addition, a weight updating rule is derived to ensure that our instance labeling process can adaptively adjust to dynamic drifting concepts in the data. Experimental results on synthetic and real-world data demonstrate the performances of the proposed efforts in comparison with other simple approaches.", "title": "" }, { "docid": "488c52d028d18227f456cb3383784d05", "text": "For smart grid execution, one of the most important requirements is fast, precise, and efficient synchronized measurements, which are possible by phasor measurement unit (PMU). To achieve fully observable network with the least number of PMUs, optimal placement of PMU (OPP) is crucial. In trying to achieve OPP, priority may be given at critical buses, generator buses, or buses that are meant for future extension. Also, different applications will have to be kept in view while prioritizing PMU placement. Hence, OPP with multiple solutions (MSs) can offer better flexibility for different placement strategies as it can meet the best solution based on the requirements. To provide MSs, an effective exponential binary particle swarm optimization (EBPSO) algorithm is developed. In this algorithm, a nonlinear inertia-weight-coefficient is used to improve the searching capability. To incorporate previous position of particle, two innovative mathematical equations that can update particle's position are formulated. For quick and reliable convergence, two useful filtration techniques that can facilitate MSs are applied. Single mutation operator is conditionally applied to avoid stagnation. The EBPSO algorithm is so developed that it can provide MSs for various practical contingencies, such as single PMU outage and single line outage for different systems.", "title": "" }, { "docid": "dd45abc886edb854707acde3e675c5f7", "text": "The connecting of physical units, such as thermostats, medical devices and self-driving vehicles, to the Internet is happening very quickly and will most likely continue to increase exponentially for some time to come. Valid concerns about security, safety and privacy do not appear to be hampering this rapid growth of the so-called Internet of Things (IoT). There have been many popular and technical publications by those in software engineering, cyber security and systems safety describing issues and proposing various “fixes.” In simple terms, they address the “why” and the “what” of IoT security, safety and privacy, but not the “how.” There are many cultural and economic reasons why security and privacy concerns are relegated to lower priorities. Also, when many systems are interconnected, the overall security, safety and privacy of the resulting systems of systems generally have not been fully considered and addressed. In order to arrive at an effective enforcement regime, we will examine the costs of implementing suitable security, safety and privacy and the economic consequences of failing to do so. We evaluated current business, professional and government structures and practices for achieving better IoT security, safety and privacy, and found them lacking. 
Consequently, we proposed a structure for ensuring that appropriate security, safety and privacy are built into systems from the outset. Within such a structure, enforcement can be achieved by incentives on one hand and penalties on the other. Determining the structures and rules necessary to optimize the mix of penalties and incentives is a major goal of this paper.", "title": "" }, { "docid": "55745523b43b49ef02bf5e7628f7be84", "text": "A fabrication process for the simultaneous shaping of arrays of glass shells on a wafer level is introduced in this paper. The process is based on etching cavities in silicon, followed by anodic bonding of a thin glass wafer to the etched silicon wafer. The bonded wafers are then heated inside a furnace at a temperature above the softening point of the glass, and due to the expansion of the trapped gas in the silicon cavities the glass is blown into three-dimensional spherical shells. An analytical model which can be used to predict the shape of the glass shells is described and demonstrated to match the experimental data. The ability to blow glass on a wafer level may enable novel capabilities including mass-production of microscopic spherical gas confinement chambers, microlenses, and complex microfluidic networks", "title": "" }, { "docid": "3e9f338da297c5173cf075fa15cd0a2e", "text": "Recent years have witnessed a surge of publications aimed at tracing temporal changes in lexical semantics using distributional methods, particularly prediction-based word embedding models. However, this vein of research lacks the cohesion, common terminology and shared practices of more established areas of natural language processing. In this paper, we survey the current state of academic research related to diachronic word embeddings and semantic shifts detection. We start with discussing the notion of semantic shifts, and then continue with an overview of the existing methods for tracing such time-related shifts with word embedding models. We propose several axes along which these methods can be compared, and outline the main challenges before this emerging subfield of NLP, as well as prospects and possible applications.", "title": "" }, { "docid": "ac9f71a97f6af0718587ffd0ea92d31d", "text": "Modern cyber-physical systems are complex networked computing systems that electronically control physical systems. Autonomous road vehicles are an important and increasingly ubiquitous instance. Unfortunately, their increasing complexity often leads to security vulnerabilities. Network connectivity exposes these vulnerable systems to remote software attacks that can result in real-world physical damage, including vehicle crashes and loss of control authority. We introduce an integrated architecture to provide provable security and safety assurance for cyber-physical systems by ensuring that safety-critical operations and control cannot be unintentionally affected by potentially malicious parts of the system. Finegrained information flow control is used to design both hardware and software, determining how low-integrity information can affect high-integrity control decisions. This security assurance is used to improve end-to-end security across the entire cyber-physical system. We demonstrate this integrated approach by developing a mobile robotic testbed modeling a self-driving system and testing it with a malicious attack. ACM Reference Format: Jed Liu, Joe Corbett-Davies, Andrew Ferraiuolo, Alexander Ivanov, Mulong Luo, G. Edward Suh, Andrew C. Myers, and Mark Campbell. 2018. 
Secure Autonomous Cyber-Physical Systems Through Verifiable Information Flow Control. In Workshop on Cyber-Physical Systems Security & Privacy (CPS-SPC ’18), October 19, 2018, Toronto, ON, Canada. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3264888.3264889", "title": "" }, { "docid": "c3b6d3b81153637d104efa5382a7a0c8", "text": "The convex relaxation approaches for power system state estimation (PSSE) offer robust alternatives to the conventional PSSE algorithms, by avoiding local optima and providing guaranteed convergence, critical especially when the states deviate significantly from the nominal conditions. On the other hand, the associated semidefinite programming problem may be computationally demanding. In this work, a variable splitting technique called alternating direction method of multipliers is employed to reduce the complexity, and also efficiently accommodate a regularizer promoting desired low-rank matrix solutions. Both static and online formulations are developed. Numerical tests verify the efficacy of the proposed techniques.", "title": "" }, { "docid": "1872c5cc4638a525517940e606e9db2f", "text": "Cyclic Redundancy Check (CRC) plays a vital role in detecting errors in networking environments. With the ever-increasing speed of data transmission, it is necessary to increase the speed of CRC generation accordingly. This paper presents a 64-bit parallel CRC architecture based on the F-matrix, with a generator polynomial of order 32. The implemented design is hardware efficient and requires 50% fewer cycles to generate the CRC with the same order of generator polynomial. CRC-32 is used in the Ethernet frame for error detection. The whole design is functionally developed and verified using the Xilinx ISE 12.3i Simulator.", "title": "" }, { "docid": "be17532b93e28edb4f73462cfe17f96d", "text": "OBJECTIVES\nThe purpose of this study was to conduct a review of randomized controlled trials (RCTs) to determine the treatment effectiveness of the combination of manual therapy (MT) with other physical therapy techniques.\n\n\nMETHODS\nSystematic searches of scientific literature were undertaken on PubMed and the Cochrane Library (2004-2014). The following terms were used: \"patellofemoral pain syndrome,\" \"physical therapy,\" \"manual therapy,\" and \"manipulation.\" RCTs that studied adults diagnosed with patellofemoral pain syndrome (PFPS) treated by MT and physical therapy approaches were included. The quality of the studies was assessed by the Jadad Scale.\n\n\nRESULTS\nFive RCTs with an acceptable methodological quality (Jadad ≥ 3) were selected. The studies indicated that MT combined with physical therapy has some effect on reducing pain and improving function in PFPS, especially when applied on the full kinetic chain and when strengthening hip and knee muscles.\n\n\nCONCLUSIONS\nThe different combinations of MT and physical therapy programs analyzed in this review suggest that giving more emphasis to proximal stabilization and full kinetic chain treatments in PFPS will help better alleviation of symptoms.", "title": "" }, { "docid": "e99eceb3072dc2798071fe9d65d30c3a", "text": "With the vast availability of traffic sensors from which traffic information can be derived, a lot of research effort has been devoted to developing traffic prediction techniques, which in turn improve route navigation, traffic regulation, urban area planning, etc. 
One key challenge in traffic prediction is how much to rely on prediction models that are constructed using historical data in real-time traffic situations, which may differ from that of the historical data and change over time. In this paper, we propose a novel online framework that could learn from the current traffic situation (or context) in real-time and predict the future traffic by matching the current situation to the most effective prediction model trained using historical data. As real-time traffic arrives, the traffic context space is adaptively partitioned in order to efficiently estimate the effectiveness of each base predictor in different situations. We obtain and prove both short-term and long-term performance guarantees (bounds) for our online algorithm. The proposed algorithm also works effectively in scenarios where the true labels (i.e., realized traffic) are missing or become available with delay. Using the proposed framework, the context dimension that is the most relevant to traffic prediction can also be revealed, which can further reduce the implementation complexity as well as inform traffic policy making. Our experiments with real-world data in real-life conditions show that the proposed approach significantly outperforms existing solutions.", "title": "" }, { "docid": "ef1d9f9c22408641285aa7b088d44d75", "text": "Short text stream classification is a challenging and significant task due to the characteristics of short length, weak signal, high velocity and especially topic drifting in short text stream. However, this challenge has received little attention from the research community. Motivated by this, we propose a new feature extension approach for short text stream classification using a large scale, general purpose semantic network obtained from a web corpus. Our approach is built on an incremental ensemble classification model. First, in terms of the open semantic network, we introduce more semantic contexts in short texts to make up for the data sparsity. Meanwhile, we disambiguate terms by their semantics to reduce the noise impact. Second, to effectively track hidden topic drifts, we propose a concept cluster based topic drifting detection method. Finally, extensive experiments demonstrate that our approach can detect topic drifts effectively compared to several well-known concept drifting detection methods in data streams. Meanwhile, our approach can perform best in the classification of text data streams compared to several state-of-the-art short text classification approaches.", "title": "" }, { "docid": "cf48a139219a096a5e75e5462ed492d1", "text": "Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games often proceed by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics are more complex and less understood. In this paper, we analyze gradient-based methods with momentum on simple games. We prove that alternating updates are more stable than simultaneous updates. 
Next, we show both theoretically and empirically that alternating gradient updates with a negative momentum term achieves convergence in a difficult toy adversarial problem, but also on the notoriously difficult to train saturating GANs.", "title": "" }, { "docid": "240ce581d80ad7fd604bbbef60066820", "text": "In this paper, we present subgraph2vec, a novel approach for learning latent representations of rooted subgraphs from large graphs inspired by recent advancements in Deep Learning and Graph Kernels. These latent representations encode semantic substructure dependencies in a continuous vector space, which is easily exploited by statistical models for tasks such as graph classification, clustering, link prediction and community detection. subgraph2vec leverages on local information obtained from neighbourhoods of nodes to learn their latent representations in an unsupervised fashion. We demonstrate that subgraph vectors learnt by our approach could be used in conjunction with classifiers such as CNNs, SVMs and relational data clustering algorithms to achieve significantly superior accuracies. Also, we show that the subgraph vectors could be used for building a deep learning variant of Weisfeiler-Lehman graph kernel. Our experiments on several benchmark and large-scale real-world datasets reveal that subgraph2vec achieves significant improvements in accuracies over existing graph kernels on both supervised and unsupervised learning tasks. Specifically, on two realworld program analysis tasks, namely, code clone and malware detection, subgraph2vec outperforms state-of-the-art kernels by more than 17% and 4%, respectively.", "title": "" }, { "docid": "fb8b90ccf64f64e7f5c4e2c6718107df", "text": "The Standardized Precipitation Evapotranspiration Index (SPEI) was developed in 2010 and has been used in an increasing number of climatology and hydrology studies. The objective of this article is to describe computing options that provide flexible and robust use of the SPEI. In particular, we present methods for estimating the parameters of the log-logistic distribution for obtaining standardized values, methods for computing reference evapotranspiration (ET0), and weighting kernels used for calculation of the SPEI at different time scales. We discuss the use of alternative ET0 and actual evapotranspiration (ETa) methods and different options on the resulting SPEI series by use of observational and global gridded data. The results indicate that the equation used to calculate ET0 can have a significant effect on the SPEI in some regions of the world. Although the original formulation of the SPEI was based on plotting-positions Probability Weighted Moment (PWM), we now recommend use of unbiased PWM for model fitting. Finally, we present new software tools for computation and analysis of SPEI series, an updated global gridded database, and a realtime drought-monitoring system.", "title": "" }, { "docid": "a227304b25f807c673444853dde0c28e", "text": "This paper introduces novel vacuum/compression valves (VCVs) utilizing paraffin wax. A VCV is implemented by sealing the venting channel/hole with wax plugs (for normally-closed valve), or to be sealed by wax (for normally-open valve), and is activated by localized heating on the CD surface. We demonstrate that the VCV provides the advantages of avoiding unnecessary heating of the sample/reagents in the diagnostic process, allowing for vacuum sealing of the CD, and clear separation of the paraffin wax from the sample/reagents in the microfluidic process. 
As a proof of concept, the microfluidic processes of liquid flow switching and liquid metering are demonstrated with the VCV. Results show that the VCV lowers the required spinning frequency to perform the microfluidic processes with high accuracy and ease of control.", "title": "" }, { "docid": "2946b8bd377019a2c475ea3e4fbd5df0", "text": "OBJECTIVE\nTo present a retrospective study of 16 patients submitted to hip disarticulation.\n\n\nMETHODS\nDuring the period of 16 years, 16 patients who underwent hip disarticulation were identified. All of them were studied based on clinical records regarding the gender, age at surgery, disarticulation cause, postoperative complications, mortality rates and functional status after hip disarticulation.\n\n\nRESULTS\nHip disarticulation was performed electively in most cases and urgently in only three cases. The indications had the following origins: infection (n = 6), tumor (n = 6), trauma (n = 3), and ischemia (n = 2). The mean post-surgery survival was 200.5 days. The survival rates were 68.75% after six months, 56.25% after one year, and 50% after three years. The mortality rates were higher in disarticulations with traumatic (66.7%) and tumoral (60%) causes. Regarding the eight patients who survived, half of them ambulate with crutches and without prosthesis, 25% walk with limb prosthesis, and 25% are bedridden. Complications and mortality were higher in the cases of urgent surgery, and in those with traumatic and tumoral causes.\n\n\nCONCLUSION\nHip disarticulation is a major ablative surgery with obvious implications for limb functionality, as well as high rates of complications and mortality. However, when performed at the correct time and with proper indication, this procedure can be life-saving and can ensure the return to the home environment with a certain degree of quality of life.", "title": "" }, { "docid": "998fe25641f4f6dc6649b02226c5e86a", "text": "We present the malicious administrator problem, in which one or more network administrators attempt to damage routing, forwarding, or network availability by misconfiguring controllers. While this threat vector has been acknowledged in previous work, most solutions have focused on enforcing specific policies for forwarding rules. We present a definition of this problem and a controller design called Fleet that makes a first step towards addressing this problem. We present two protocols that can be used with the Fleet controller, and argue that its lower layer deployed on top of switches eliminates many problems of using multiple controllers in SDNs. We then present a prototype simulation and show that as long as a majority of non-malicious administrators exists, we can usually recover from link failures within several seconds (a time dominated by failure detection speed and inter-administrator latency).", "title": "" } ]
scidocsrr
e84e77015e717c37639666de303ee81b
Compressed knowledge transfer via factorization machine for heterogeneous collaborative recommendation
[ { "docid": "91f718a69532c4193d5e06bf1ea19fd3", "text": "Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.\n Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monto Carlo (MCMC). This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.", "title": "" }, { "docid": "7ad0c164ece34159f9051c1510761aa8", "text": "Collaborative filtering (CF) is a major technique in recommender systems to help users find their potentially desired items. Since the data sparsity problem is quite commonly encountered in real-world scenarios, Cross-Domain Collaborative Filtering (CDCF) hence is becoming an emerging research topic in recent years. However, due to the lack of sufficient dense explicit feedbacks and even no feedback available in users' uninvolved domains, current CDCF approaches may not perform satisfactorily in user preference prediction. In this paper, we propose a generalized Cross Domain Triadic Factorization (CDTF) model over the triadic relation user-item-domain, which can better capture the interactions between domain-specific user factors and item factors. In particular, we devise two CDTF algorithms to leverage user explicit and implicit feedbacks respectively, along with a genetic algorithm based weight parameters tuning algorithm to trade off influence among domains optimally. Finally, we conduct experiments to evaluate our models and compare with other state-of-the-art models by using two real world datasets. The results show the superiority of our models against other comparative models.", "title": "" } ]
[ { "docid": "4ec0aacd0ded1f775f8e9cadaff1513c", "text": "of a dissertation at the University of Miami. Dissertation supervised by Professor Abhishek Prasad. No. of pages in text. (82) Objective: Brain machine interface (BMI) or Brain Computer Interface (BCI) provides a direct pathway between the brain and an external device to help people suffering from severely impaired motor function by decoding brain activities and translating human intentions into control signals. Conventionally, the decoding pipeline for BMIs consists of chained different stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes the whole system difficult to be adaptive. Our goal is to create differentiable signal processing modules and plug them together to build an adaptive online system. The system could be trained with a single objective function and a single learning algorithm so that each component can be updated in parallel to increase the performance in a robust manner. We use deep neural networks to address these needs. Main Results: We predicted the finger trajectory using Electrocorticography (ECoG) signals and compared results for the Least Angle Regression (LARS), Convolutional Long Short Term Memory Network (Conv-LSTM), Random Forest (RF), and a pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest state. We also estimated the source connectivity of the brain signals using a Recurrent Neural Network (RNN) and it correctly estimated the order and sparsity level of the underlying Multivariate Auto-regressive process (MVAR). The time course of the source connectivity was also recovered. Significance: We replace the conventional signal processing pipeline with differentiable modules so that the whole BMI system is adaptive. The study of the decoding system demonstrated a model for BMI that involved a convolutional and recurrent neural network. It integrated the feature extraction pipeline into the convolution and pooling layer and used Long Short Term Memory (LSTM) layer to capture the state transitions. The decoding network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning. The study of the source connectivity estimation demonstrated a generative RNN model that can estimate the un-mixing matrix and the MVAR coefficients of the source activity at the same time. Our method addressed the issue of estimation and inference of the non-stationary MVAR coefficients and the un-mixing matrix in the presence of non-gaussian noise. More importantly, this model can be easily plugged into the BMI decoding system as a differentiable feature extraction module.", "title": "" }, { "docid": "958b0739c5c2d65bbb1cf0b7687610ff", "text": "BACKGROUND\nDexlansoprazole is a new proton pump inhibitor (PPI) with a dual delayed-release system. Both dexlansoprazole and esomeprazole are an enantiomer of lansoprazole and omeprazole respectively. 
However, there is no head-to-head trial data or indirect comparison analyses between dexlansoprazole and esomeprazole.\n\n\nAIM\nTo compare the efficacy of dexlansoprazole with esomeprazole in healing erosive oesophagitis (EO), the maintenance of healed EO and the treatment of non-erosive reflux disease (NERD).\n\n\nMETHODS\nRandomised Controlled Trials (RCTs) comparing dexlansoprazole or esomeprazole with either placebo or another PPI were systematically reviewed. Random-effect meta-analyses and adjusted indirect comparisons were conducted to compare the treatment effect of dexlansoprazole and esomeprazole using a common comparator. The relative risk (RR) and 95% confidence interval (CI) were calculated.\n\n\nRESULTS\nThe indirect comparisons revealed significant differences in symptom control of heartburn in patients with NERD at 4 weeks. Dexlansoprazole 30 mg was more effective than esomeprazole 20 mg or 40 mg (RR: 2.01, 95% CI: 1.15-3.51; RR: 2.17, 95% CI: 1.39-3.38). However, there were no statistically significant differences between the two drugs in EO healing and maintenance of healed EO. Comparison of symptom control in healed EO was not able to be made due to different definitions used in the RCTs.\n\n\nCONCLUSIONS\nAdjusted indirect comparisons based on currently available RCT data suggested significantly better treatment effect in symptom control of heartburn in patients with NERD for dexlansoprazole against esomeprazole. No statistically significant differences were found in other EO outcomes. However, these study findings need to be interpreted with caution due to small number of studies and other limitations.", "title": "" }, { "docid": "8f618ff8a949c3c3a52e48b43bf82a56", "text": "We consider Facebook unfriending as a form of relationship termination with negative emotional and cognitive consequences. Specifically, ruminative and negative emotional responses are examined via an online survey of adult Facebook users who were unfriended. These responses were positively related to each other and to Facebook intensity. Rumination was positively predicted by using Facebook to connect with existing contacts and was more likely when the unfriender was a close partner. Participants also responded with greater rumination and negative emotion when they knew who unfriended them, when they thought they were unfriended for Facebook-related reasons, and when participants initiated the Facebook friend request. The contribution of these exploratory findings to our growing understanding of negative relational behaviors on Facebook is discussed.", "title": "" }, { "docid": "da3876613301b46645408e474c1f5247", "text": "The Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler and Thiele 1999) is a relatively recent technique for finding or approximating the Pareto-optimal set for multiobjective optimization problems. In different studies (Zitzler and Thiele 1999; Zitzler, Deb, and Thiele 2000) SPEA has shown very good performance in comparison to other multiobjective evolutionary algorithms, and therefore it has been a point of reference in various recent investigations, e.g., (Corne, Knowles, and Oates 2000). Furthermore, it has been used in different applications, e.g., (Lahanas, Milickovic, Baltas, and Zamboglou 2001). 
In this paper, an improved version, namely SPEA2, is proposed, which incorporates, in contrast to its predecessor, a fine-grained fitness assignment strategy, a density estimation technique, and an enhanced archive truncation method. The comparison of SPEA2 with SPEA and two other modern elitist methods, PESA and NSGA-II, on different test problems yields promising results.", "title": "" }, { "docid": "b74b4bf924478e6a70a2da33bc47ea23", "text": "Most automatic scoring systems are pattern based, which requires a lot of hard and tedious work. These systems work in a supervised manner where predefined patterns and scoring rules are generated. This paper presents a different unsupervised approach which deals with students’ answers holistically using text to text similarity. Different String-based and Corpus-based similarity measures were tested separately and then combined to achieve a maximum correlation value of 0.504. The achieved correlation is the best value achieved for an unsupervised Bag of Words (BOW) approach when compared to previous work. Keywords-Automatic Scoring; Short Answer Grading; Semantic Similarity; String Similarity; Corpus-Based Similarity.", "title": "" }, { "docid": "b12947614198d639aef0d3a26b83a215", "text": "In the era of mobile Internet, mobile operators are facing pressure on ever-increasing capital expenditures and operating expenses with much less growth of income. Cloud Radio Access Network (C-RAN) is expected to be a candidate of next generation access network techniques that can solve operators' puzzle. In this article, on the basis of a general survey of C-RAN, we present a novel logical structure of C-RAN that consists of a physical plane, a control plane, and a service plane. Compared to traditional architecture, the proposed C-RAN architecture emphasizes the notion of service cloud, service-oriented resource scheduling and management, thus it facilitates the utilization of new communication and computer techniques. With the extensive computation resource offered by the cloud platform, a coordinated user scheduling algorithm and parallel optimum precoding scheme are proposed, which can achieve better performance. The proposed scheme opens another door to design new algorithms matching well with C-RAN architecture, instead of only migrating existing algorithms from traditional architecture to C-RAN.", "title": "" }, { "docid": "ec4e295ea2deb8b372b7f28b8fe8b81e", "text": "Terms such as moral and ethical leadership are used widely in theory, yet little systematic research has related a sociomoral dimension to leadership in organizations. This study investigated whether managers' moral reasoning (n = 132) was associated with the transformational and transactional leadership behaviors they exhibited as perceived by their subordinates (n = 407). Managers completed the Defining Issues Test (J. R. Rest, 1990), whereas their subordinates completed the Multifactor Leadership Questionnaire (B. M. Bass & B. J. Avolio, 1995). Analysis of covariance indicated that managers scoring in the highest group of the moral-reasoning distribution exhibited more transformational leadership behaviors than leaders scoring in the lowest group. As expected, there was no relationship between moral-reasoning group and transactional leadership behaviors. 
Implications for leadership development are discussed.", "title": "" }, { "docid": "5ef0c7a1e7970c1f37e18447c0c3aaf8", "text": "Most existing high-performance co-segmentation algorithms are usually complicated due to the way of co-labelling a set of images and the requirement to handle quite a few parameters for effective co-segmentation. In this paper, instead of relying on the complex process of co-labelling multiple images, we perform segmentation on individual images but based on a combined saliency map that is obtained by fusing single-image saliency maps of a group of similar images. Particularly, a new multiple image based saliency map extraction, namely geometric mean saliency (GMS) method, is proposed to obtain the global saliency maps. In GMS, we transmit the saliency information among the images using the warping technique. Experiments show that our method is able to outperform state-of-the-art methods on three benchmark co-segmentation datasets.", "title": "" }, { "docid": "4ebd98d8efa7bbd6b7cda9f39701ec15", "text": "Solving statistical learning problems often involves nonconvex optimization. Despite the empirical success of nonconvex statistical optimization methods, their global dynamics, especially convergence to the desirable local minima, remain less well understood in theory. In this paper, we propose a new analytic paradigm based on diffusion processes to characterize the global dynamics of nonconvex statistical optimization. As a concrete example, we study stochastic gradient descent (SGD) for the tensor decomposition formulation of independent component analysis. In particular, we cast different phases of SGD into diffusion processes, i.e., solutions to stochastic differential equations. Initialized from an unstable equilibrium, the global dynamics of SGD transit over three consecutive phases: (i) an unstable Ornstein-Uhlenbeck process slowly departing from the initialization, (ii) the solution to an ordinary differential equation, which quickly evolves towards the desirable local minimum, and (iii) a stable Ornstein-Uhlenbeck process oscillating around the desirable local minimum. Our proof techniques are based upon Stroock and Varadhan’s weak convergence of Markov chains to diffusion processes, which are of independent interest.", "title": "" }, { "docid": "7abe1fd1b0f2a89bf51447eaef7aa989", "text": "End users increasingly expect ubiquitous connectivity while on the move. With a variety of wireless access technologies available, we expect to always be connected to the technology that best matches our performance goals and price points. Meanwhile, sophisticated onboard units (OBUs) enable geolocation and complex computation in support of handover. In this paper, we present an overview of vertical handover techniques and propose an algorithm empowered by the IEEE 802.21 standard, which considers the particularities of the vehicular networks (VNs), the surrounding context, the application requirements, the user preferences, and the different available wireless networks [i.e., Wireless Fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), and Universal Mobile Telecommunications System (UMTS)] to improve users' quality of experience (QoE). 
Our results demonstrate that our approach, under the considered scenario, is able to meet application requirements while ensuring user preferences are also met.", "title": "" }, { "docid": "e106afaefd5e61f4a5787a7ae0c92934", "text": "Novelty detection is concerned with recognising inputs that differ in some way from those that are usually seen. It is a useful technique in cases where an important class of data is under-represented in the training set. This means that the performance of the network will be poor for those classes. In some circumstances, such as medical data and fault detection, it is often precisely the class that is under-represented in the data, the disease or potential fault, that the network should detect. In novelty detection systems the network is trained only on the negative examples where that class is not present, and then detects inputs that do not fit into the model that it has acquired, that is, members of the novel class. This paper reviews the literature on novelty detection in neural networks and other machine learning techniques, as well as providing brief overviews of the related topics of statistical outlier detection and novelty detection in biological organisms.", "title": "" }, { "docid": "64e1953833fe13e0d99928e442d75d11", "text": "We develop a new framework to achieve the goal of Wikipedia entity expansion and attribute extraction from the Web. Our framework takes a few existing entities that are automatically collected from a particular Wikipedia category as seed input and explores their attribute infoboxes to obtain clues for the discovery of more entities for this category and the attribute content of the newly discovered entities. One characteristic of our framework is to conduct discovery and extraction from desirable semi-structured data record sets which are automatically collected from the Web. A semi-supervised learning model with Conditional Random Fields is developed to deal with the issues of extraction learning and limited number of labeled examples derived from the seed entities. We make use of a proximate record graph to guide the semi-supervised learning process. The graph captures alignment similarity among data records. Then the semi-supervised learning process can leverage the unlabeled data in the record set by controlling the label regularization under the guidance of the proximate record graph. Extensive experiments on different domains have been conducted to demonstrate its superiority for discovering new entities and extracting attribute content.", "title": "" }, { "docid": "bb685e028e4f1005b7fe9da01f279784", "text": "Although there are few efficient algorithms in the literature for scientific workflow tasks allocation and scheduling for heterogeneous resources such as those proposed in grid computing context, they usually require a bounded number of computer resources that cannot be applied in Cloud computing environment. Indeed, unlike grid, elastic computing, such as Amazon's EC2, allows users to allocate and release compute resources on-demand and pay only for what they use. Therefore, it is reasonable to assume that the number of resources is infinite. This feature of Clouds has been called the “illusion of infinite resources”. However, despite the proven benefits of using Cloud to run scientific workflows, users lack guidance for choosing between multiple offerings while taking into account several objectives which are often conflicting. 
On the other side, the workflow tasks allocation and scheduling have been shown to be NP-complete problems. Thus, it is convenient to use heuristic rather than deterministic algorithm. The objective of this paper is to design an allocation strategy for Cloud computing platform. More precisely, we propose three complementary bi-criteria approaches for scheduling workflows on distributed Cloud resources, taking into account the overall execution time and the cost incurred by using a set of resources.", "title": "" }, { "docid": "1b9806a90e813b9cd452a223b81aa411", "text": "This communication presents a compact substrate-integrated waveguide (SIW) H-plane horn antenna fed by a novel elevated coplanar waveguide (ECPW) structure. First, the wideband characteristic of the SIW horn antenna is achieved through loading a dielectric slab with gradually decreasing dielectric constants, which can be realized through simply perforating different air vias on the extended slab. Second, in order to sustain an efficient feeding for the relatively thick substrate (0.27λ<sub>0</sub>), an additional metal ground is inserted in the middle of the grounded coplanar waveguide (GCPW). Moreover, a triangular-shaped transition structure is placed at the end of the ECPW to smoothly transmit the energy from the thin ECPW to the thick SIW horn antenna. Finally, a prototype is fabricated to validate the proposed concept. Measured results indicate that the proposed horn antenna operates from 17.4 to 24 GHz. Stable radiation patterns can be observed in the whole operating band. The measured results show good accordance with the simulated ones. Above all, the proposed antenna occupies an area of 22 × 56.5 × 4 mm<sup>3</sup> (1.47λ<sub>0</sub> × 3.77λ<sub>0</sub> × 0.27λ<sub>0</sub>), which is much more compact than the previous rectangular waveguide-fed horn antenna (2.33λ<sub>0</sub> × 9.21λ<sub>0</sub> × 0.31λ<sub>0</sub>) (where λ0 is the wavelength at 20 GHz in the free space).", "title": "" }, { "docid": "494c46a56fa1c55b274f1b3c653a358a", "text": "In this paper we integrate insights from diverse islands of research on electronic privacy to offer a holistic view of privacy engineering and a systematic structure for the discipline's topics. First we discuss privacy requirements grounded in both historic and contemporary perspectives on privacy. We use a three-layer model of user privacy concerns to relate them to system operations (data transfer, storage and processing) and examine their effects on user behavior. In the second part of the paper we develop guidelines for building privacy-friendly systems. We distinguish two approaches: \"privacy-by-policy\" and \"privacy-by-architecture.\" The privacy-by-policy approach focuses on the implementation of the notice and choice principles of fair information practices (FIPs), while the privacy-by-architecture approach minimizes the collection of identifiable personal data and emphasizes anonymization and client-side data storage and processing. We discuss both approaches with a view to their technical overlaps and boundaries as well as to economic feasibility. The paper aims to introduce engineers and computer scientists to the privacy research domain and provide concrete guidance on how to design privacy-friendly systems.", "title": "" }, { "docid": "50f09f5b2e579e878f041f136bafe07e", "text": "We propose a new deep learning based approach for camera relocalization. 
Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.", "title": "" }, { "docid": "750a1dd126b0bb90def0bba34dc73cdd", "text": "Skinning of skeletally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this article, we present a novel skinning algorithm based on linear combination of dual quaternions. Even though our proposed method is approximate, it does not exhibit any of the artifacts inherent in previous methods and still permits an efficient GPU implementation. Upgrading an existing animation system from linear to dual quaternion skinning is very easy and has a relatively minor impact on runtime performance.", "title": "" }, { "docid": "468cdc4decf3871314ce04d6e49f6fad", "text": "Documents come naturally with structure: a section contains paragraphs which itself contains sentences; a blog page contains a sequence of comments and links to related blogs. Structure, of course, implies something about shared topics. In this paper we take the simplest form of structure, a document consisting of multiple segments, as the basis for a new form of topic model. To make this computationally feasible, and to allow the form of collapsed Gibbs sampling that has worked well to date with topic models, we use the marginalized posterior of a two-parameter Poisson-Dirichlet process (or Pitman-Yor process) to handle the hierarchical modelling. Experiments using either paragraphs or sentences as segments show the method significantly outperforms standard topic models on either whole document or segment, and previous segmented models, based on the held-out perplexity measure.", "title": "" }, { "docid": "fb0e9f6f58051b9209388f81e1d018ff", "text": "Because many databases contain or can be embellished with structural information, a method for identifying interesting and repetitive substructures is an essential component to discovering knowledge in such databases. 
This paper describes the SUBDUE system, which uses the minimum description length (MDL) principle to discover substructures that compress the database and represent structural concepts in the data. By replacing previously-discovered substructures in the data, multiple passes of SUBDUE produce a hierarchical description of the structural regularities in the data. Inclusion of background knowledge guides SUBDUE toward appropriate substructures for a particular domain or discovery goal, and the use of an inexact graph match allows a controlled amount of deviations in the instance of a substructure concept. We describe the application of SUBDUE to a variety of domains. We also discuss approaches to combining SUBDUE with non-structural discovery systems.", "title": "" } ]
scidocsrr
7d8a7c9ba17b7b808babac48ff7992c3
Graph-boosted convolutional neural networks for semantic segmentation
[ { "docid": "047949b0dba35fb11f9f3b716893701d", "text": "Many state-of-the-art segmentation algorithms rely on Markov or Conditional Random Field models designed to enforce spatial and global consistency constraints. This is often accomplished by introducing additional latent variables to the model, which can greatly increase its complexity. As a result, estimating the model parameters or computing the best maximum a posteriori (MAP) assignment becomes a computationally expensive task. In a series of experiments on the PASCAL and the MSRC datasets, we were unable to find evidence of a significant performance increase attributed to the introduction of such constraints. On the contrary, we found that similar levels of performance can be achieved using a much simpler design that essentially ignores these constraints. This more simple approach makes use of the same local and global features to leverage evidence from the image, but instead directly biases the preferences of individual pixels. While our investigation does not prove that spatial and consistency constraints are not useful in principle, it points to the conclusion that they should be validated in a larger context.", "title": "" } ]
[ { "docid": "1db6982c56d7a46c30dde0df54faa5d5", "text": "Web of Things (WoT) facilitates the discovery and interoperability of Internet of Things (IoT) devices in a cyber-physical system (CPS). Moreover, a uniform knowledge representation of physical resources is quite necessary for further composition, collaboration, and decision-making process in CPS. Though several efforts have integrated semantics with WoT, such as knowledge engineering methods based on semantic sensor networks (SSN), it still could not represent the complex relationships between devices when dynamic composition and collaboration occur, and it totally depends on manual construction of a knowledge base with low scalability. In this paper, to addresses these limitations, we propose the semantic Web of Things (SWoT) framework for CPS (SWoT4CPS). SWoT4CPS provides a hybrid solution with both ontological engineering methods by extending SSN and machine learning methods based on an entity linking (EL) model. To testify to the feasibility and performance, we demonstrate the framework by implementing a temperature anomaly diagnosis and automatic control use case in a building automation system. Evaluation results on the EL method show that linking domain knowledge to DBpedia has a relative high accuracy and the time complexity is at a tolerant level. Advantages and disadvantages of SWoT4CPS with future work are also discussed.", "title": "" }, { "docid": "20d4f450256e0623bb2deac19e14becc", "text": "This paper presents the Self-Sorting Map (SSM), a novel algorithm for organizing and presenting multimedia data. Given a set of data items and a dissimilarity measure between each pair of them, the SSM places each item into a unique cell of a structured layout, where the most related items are placed together and the unrelated ones are spread apart. The algorithm integrates ideas from dimension reduction, sorting, and data clustering algorithms. Instead of solving the continuous optimization problem that other dimension reduction approaches do, the SSM transforms it into a discrete labeling problem. As a result, it can organize a set of data into a structured layout without overlap, providing a simple and intuitive presentation. The algorithm is designed for sorting all data items in parallel, making it possible to arrange millions of items in seconds. Experiments on different types of data demonstrate the SSM's versatility in a variety of applications, ranging from positioning city names by proximities to presenting images according to visual similarities, to visualizing semantic relatedness between Wikipedia articles.", "title": "" }, { "docid": "a9dfddc3812be19de67fc4ffbc2cad77", "text": "Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents’ policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent’s action, while keeping the other agents’ actions fixed. 
COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actorcritic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.", "title": "" }, { "docid": "dc8af68ed9bbfd8e24c438771ca1d376", "text": "Pedestrian detection has progressed significantly in the last years. However, occluded people are notoriously hard to detect, as their appearance varies substantially depending on a wide range of occlusion patterns. In this paper, we aim to propose a simple and compact method based on the FasterRCNN architecture for occluded pedestrian detection. We start with interpreting CNN channel features of a pedestrian detector, and we find that different channels activate responses for different body parts respectively. These findings motivate us to employ an attention mechanism across channels to represent various occlusion patterns in one single model, as each occlusion pattern can be formulated as some specific combination of body parts. Therefore, an attention network with self or external guidances is proposed as an add-on to the baseline FasterRCNN detector. When evaluating on the heavy occlusion subset, we achieve a significant improvement of 8pp to the baseline FasterRCNN detector on CityPersons and on Caltech we outperform the state-of-the-art method by 4pp.", "title": "" }, { "docid": "9c8e773dde5e999ac31a1a4bd279c24d", "text": "The efficiency of wireless power transfer (WPT) systems is highly dependent on the load, which may change in a wide range in field applications. Besides, the detuning of WPT systems caused by the component tolerance and aging of inductors and capacitors can also decrease the system efficiency. In order to track the maximum system efficiency under varied loads and detuning conditions in real time, an active single-phase rectifier (ASPR) with an auxiliary measurement coil (AMC) and its corresponding control method are proposed in this paper. Both the equivalent load impedance and the output voltage can be regulated by the ASPR and the inverter, separately. First, the fundamental harmonic analysis model is established to analyze the influence of the load and the detuning on the system efficiency. Second, the soft-switching conditions and the equivalent input impedance of ASPR with different phase shifts and pulse widths are investigated in detail. Then, the analysis of the AMC and the maximum efficiency control strategy are provided in detail. Finally, an 800-W prototype is set up to validate the performance of the proposed method. The experimental results show that with 10% tolerance of the resonant capacitor in the receiver side, the system efficiency with the proposed approach reaches 91.7% at rated 800-W load and 91.1% at 300-W light load, which has an improvement by 2% and 10% separately compared with the traditional diode rectifier.", "title": "" }, { "docid": "d1f89e14ff9382294b2597233b06b433", "text": "Online referrals have become an important mechanism in leveraging consumers’ social networks to spread firms’ promotional campaigns and thus attract new customers. 
However, despite a robust understanding of the benefits and drivers of consumer referrals, only minimal attention has been paid towards the potential of classical promotional tactics in influencing referral behavior. Therefore, this study examines scarcity and social proof, two promotional cues which are linked to extant referral literature and are of great practical relevance, in the context of a randomized online experiment with the German startup Blinkist. Our analysis reveals that scarcity cues affect consumers' referral propensity regardless of the presence of social proof cues, but that social proof cues amplify scarcity’s effect on consumer referral propensity. We demonstrate that consumers’ perceptions of offer value drive the impact of scarcity on referral likelihood and illuminate how social proof moderates this mediating effect.", "title": "" }, { "docid": "3364f6fab787e3dbcc4cb611960748b8", "text": "Filamentous fungi can each produce dozens of secondary metabolites which are attractive as therapeutics, drugs, antimicrobials, flavour compounds and other high-value chemicals. Furthermore, they can be used as an expression system for eukaryotic proteins. Application of most fungal secondary metabolites is, however, so far hampered by the lack of suitable fermentation protocols for the producing strain and/or by low product titers. To overcome these limitations, we report here the engineering of the industrial fungus Aspergillus niger to produce high titers (up to 4,500 mg • l−1) of secondary metabolites belonging to the class of nonribosomal peptides. For a proof-of-concept study, we heterologously expressed the 351 kDa nonribosomal peptide synthetase ESYN from Fusarium oxysporum in A. niger. ESYN catalyzes the formation of cyclic depsipeptides of the enniatin family, which exhibit antimicrobial, antiviral and anticancer activities. The encoding gene esyn1 was put under control of a tunable bacterial-fungal hybrid promoter (Tet-on) which was switched on during early-exponential growth phase of A. niger cultures. The enniatins were isolated and purified by means of reverse phase chromatography and their identity and purity proven by tandem MS, NMR spectroscopy and X-ray crystallography. The initial yields of 1 mg • l−1 of enniatin were increased about 950 fold by optimizing feeding conditions and the morphology of A. niger in liquid shake flask cultures. Further yield optimization (about 4.5 fold) was accomplished by cultivating A. niger in 5 l fed batch fermentations. Finally, an autonomous A. niger expression host was established, which was independent from feeding with the enniatin precursor d-2-hydroxyvaleric acid d-Hiv. This was achieved by constitutively expressing a fungal d-Hiv dehydrogenase in the esyn1-expressing A. niger strain, which used the intracellular α-ketovaleric acid pool to generate d-Hiv. This is the first report demonstrating that A. niger is a potent and promising expression host for nonribosomal peptides with titers high enough to become industrially attractive. Application of the Tet-on system in A. niger allows precise control on the timing of product formation, thereby ensuring high yields and purity of the peptides produced.", "title": "" }, { "docid": "92600ef3d90d5289f70b10ccedff7a81", "text": "In this paper, the chicken farm monitoring system is proposed and developed based on wireless communication unit to transfer data by using the wireless module combined with the sensors that enable to detect temperature, humidity, light and water level values. 
This system is focused on the collecting, storing, and controlling the information of the chicken farm so that the high quality and quantity of the meal production can be produced. This system is developed to solve several problems in the chicken farm which are many human workers is needed to control the farm, high cost in maintenance, and inaccurate data collected at one point. The proposed methodology really helps in finishing this project within the period given. Based on the research that has been carried out, the system that can monitor and control environment condition (temperature, humidity, and light) has been developed by using the Arduino microcontroller. This system also is able to collect data and operate autonomously.", "title": "" }, { "docid": "127406000c2ede6517513bfa21747431", "text": "These are exciting times for cancer immunotherapy. After many years of disappointing results, the tide has finally changed and immunotherapy has become a clinically validated treatment for many cancers. Immunotherapeutic strategies include cancer vaccines, oncolytic viruses, adoptive transfer of ex vivo activated T and natural killer cells, and administration of antibodies or recombinant proteins that either costimulate cells or block the so-called immune checkpoint pathways. The recent success of several immunotherapeutic regimes, such as monoclonal antibody blocking of cytotoxic T lymphocyte-associated protein 4 (CTLA-4) and programmed cell death protein 1 (PD1), has boosted the development of this treatment modality, with the consequence that new therapeutic targets and schemes which combine various immunological agents are now being described at a breathtaking pace. In this review, we outline some of the main strategies in cancer immunotherapy (cancer vaccines, adoptive cellular immunotherapy, immune checkpoint blockade, and oncolytic viruses) and discuss the progress in the synergistic design of immune-targeting combination therapies.", "title": "" }, { "docid": "aa3abc75e37ed6de703d05c274806220", "text": "We conducted an extensive set of empirical analyses to examine the effect of the number of events per variable (EPV) on the relative performance of three different methods for assessing the predictive accuracy of a logistic regression model: apparent performance in the analysis sample, split-sample validation, and optimism correction using bootstrap methods. Using a single dataset of patients hospitalized with heart failure, we compared the estimates of discriminatory performance from these methods to those for a very large independent validation sample arising from the same population. As anticipated, the apparent performance was optimistically biased, with the degree of optimism diminishing as the number of events per variable increased. Differences between the bootstrap-corrected approach and the use of an independent validation sample were minimal once the number of events per variable was at least 20. Split-sample assessment resulted in too pessimistic and highly uncertain estimates of model performance. Apparent performance estimates had lower mean squared error compared to split-sample estimates, but the lowest mean squared error was obtained by bootstrap-corrected optimism estimates. For bias, variance, and mean squared error of the performance estimates, the penalty incurred by using split-sample validation was equivalent to reducing the sample size by a proportion equivalent to the proportion of the sample that was withheld for model validation. 
In conclusion, split-sample validation is inefficient and apparent performance is too optimistic for internal validation of regression-based prediction models. Modern validation methods, such as bootstrap-based optimism correction, are preferable. While these findings may be unsurprising to many statisticians, the results of the current study reinforce what should be considered good statistical practice in the development and validation of clinical prediction models.", "title": "" }, { "docid": "28e9bb0eef126b9969389068b6810073", "text": "This paper presents the task specifications for designing a novel Insertable Robotic Effectors Platform (IREP) with integrated stereo vision and surgical intervention tools for Single Port Access Surgery (SPAS). This design provides a compact deployable mechanical architecture that may be inserted through a single Ø15 mm access port. Dexterous surgical intervention and stereo vision are achieved via the use of two snake-like continuum robots and two controllable CCD cameras. Simulations and dexterity evaluation of our proposed design are compared to several design alternatives with different kinematic arrangements. Results of these simulations show that dexterity is improved by using an independent revolute joint at the tip of a continuum robot instead of achieving distal rotation by transmission of rotation about the backbone of the continuum robot. Further, it is shown that designs with two robotic continuum robots as surgical arms have diminished dexterity if the bases of these arms are close to each other. This result justifies our design and points to ways of improving the performance of existing designs that use continuum robots as surgical arms.", "title": "" }, { "docid": "7aa6b9cb3a7a78ec26aff130a1c9015a", "text": "As critical infrastructures in the Internet, data centers have evolved to include hundreds of thousands of servers in a single facility to support dataand/or computing-intensive applications. For such large-scale systems, it becomes a great challenge to design an interconnection network that provides high capacity, low complexity, low latency and low power consumption. The traditional approach is to build a hierarchical packet network using switches and routers. This approach suffers from limited scalability in the aspects of power consumption, wiring and control complexity, and delay caused by multi-hop store-andforwarding. In this paper we tackle the challenge by designing a novel switch architecture that supports direct interconnection of huge number of server racks and provides switching capacity at the level of Petabit/s. Our design combines the best features of electronics and optics. Exploiting recent advances in optics, we propose to build a bufferless optical switch fabric that includes interconnected arrayed waveguide grating routers (AWGRs) and tunable wavelength converters (TWCs). The optical fabric is integrated with electronic buffering and control to perform highspeed switching with nanosecond-level reconfiguration overhead. In particular, our architecture reduces the wiring complexity from O(N) to O(sqrt(N)). We design a practical and scalable scheduling algorithm to achieve high throughput under various traffic load. We also discuss implementation issues to justify the feasibility of this design. 
Simulation shows that our design achieves good throughput and delay performance.", "title": "" }, { "docid": "b51a1df32ce34ae3f1109a9053b4bc1f", "text": "Nowadays many automobile manufacturers are switching to Electric Power Steering (EPS) for its advantages on performance and cost. In this paper, a mathematical model of a column type EPS system is established, and its state-space expression is constructed. Then three different control methods are implemented and performance, robustness and disturbance rejection properties of the EPS control systems are investigated. The controllers are tested via simulation and results show a modified Linear Quadratic Gaussian (LQG) controller can track the characteristic curve well and effectively attenuate external disturbances.", "title": "" }, { "docid": "2331098bd8099a8dba7bab10c9322b5f", "text": "Aggregating extra features has been considered as an effective approach to boost traditional pedestrian detection methods. However, there is still a lack of studies on whether and how CNN-based pedestrian detectors can benefit from these extra features. The first contribution of this paper is exploring this issue by aggregating extra features into CNN-based pedestrian detection framework. Through extensive experiments, we evaluate the effects of different kinds of extra features quantitatively. Moreover, we propose a novel network architecture, namely HyperLearner, to jointly learn pedestrian detection as well as the given extra feature. By multi-task training, HyperLearner is able to utilize the information of given features and improve detection performance without extra inputs in inference. The experimental results on multiple pedestrian benchmarks validate the effectiveness of the proposed HyperLearner.", "title": "" }, { "docid": "f92d8e163f3f4665bafaa2d662a3fb57", "text": "Mobile cloud computing utilizing cloudlet is an emerging technology to improve the quality of mobile services. In this paper, to better overcome the main bottlenecks of the computation capability of cloudlet and the wireless bandwidth between mobile devices and cloudlet, we consider the multi-resource allocation problem for the cloudlet environment with resource-intensive and latency-sensitive mobile applications. The proposed multi-resource allocation strategy enhances the quality of mobile cloud service, in terms of the system throughput (the number of admitted mobile applications) and the service latency. We formulate the resource allocation model as a semi-Markov decision process under the average cost criterion, and solve the optimization problem using linear programming technology. Through maximizing the long-term reward while meeting the system requirements of the request blocking probability and service time latency, an optimal resource allocation policy is calculated. From simulation result, it is indicated that the system adaptively adjusts the allocation policy about how much resource to allocate and whether to utilize the distant cloud according to the traffic of mobile service requests and the availability of the resource in the system. Our algorithm outperforms greedy admission control over a broad range of environments.", "title": "" }, { "docid": "2d2d4d439021ee8665ddc3d97d879214", "text": "We present the use of an oblique angle physical vapor deposition OAPVDd technique with substrate rotation to obtain conformal thin films with enhanced step coverage on patterned surfaces. 
We report the results of ruthenium (Ru) films sputter deposited on trench structures with aspect ratio <2 and show that OAPVD with an incidence angle less than 30° with respect to the substrate surface normal, one can create a more conformal coating without overhangs and voids compared to that obtained by normal incidence deposition. A simple geometrical shadowing effect is presented to explain the results. The technique has the potential of extending the present PVD technique to future chip interconnect fabrication. © 2005 American Institute of Physics. [DOI: 10.1063/1.1937476]", "title": "" }, { "docid": "688dc1cc592e1fcd60445e640d8294d8", "text": "Techniques for high dynamic range (HDR) imaging make it possible to capture and store an increased range of luminances and colors as compared to what can be achieved with a conventional camera. This high amount of image information can be used in a wide range of applications, such as HDR displays, image-based lighting, tone-mapping, computer vision, and post-processing operations. HDR imaging has been an important concept in research and development for many years. Within the last couple of years it has also reached the consumer market, e.g. with TV displays that are capable of reproducing an increased dynamic range and peak luminance. This thesis presents a set of technical contributions within the field of HDR imaging. First, the area of HDR video tone-mapping is thoroughly reviewed, evaluated and developed upon. A subjective comparison experiment of existing methods is performed, followed by the development of novel techniques that overcome many of the problems evidenced by the evaluation. Second, a large-scale objective comparison is presented, which evaluates existing techniques that are involved in HDR video distribution. From the results, a first open-source HDR video codec solution, Luma HDRv, is built using the best performing techniques. Third, a machine learning method is proposed for the purpose of reconstructing an HDR image from one single-exposure low dynamic range (LDR) image. The method is trained on a large set of HDR images, using recent advances in deep learning, and the results increase the quality and performance significantly as compared to existing algorithms. The areas for which contributions are presented can be closely inter-linked in the HDR imaging pipeline. Here, the thesis work helps in promoting efficient and high-quality HDR video distribution and display, as well as robust HDR image reconstruction from a single conventional LDR image.", "title": "" }, { "docid": "aa356ad4168fc4a55ca39cecc818c86e", "text": "A novel augmented complex-valued common spatial pattern (CSP) algorithm is introduced in order to cater for general complex signals with noncircular probability distributions. This is a typical case in multichannel electroencephalogram (EEG), due to the power difference or correlation between the data channels, yet current methods only cater for a very restrictive class of circular data. The proposed complex-valued CSP algorithms account for the generality of complex noncircular data, by virtue of the use of augmented complex statistics and the strong-uncorrelating transform (SUT). Depending on the degree of power difference of complex signals, the analysis and simulations show that the SUT based algorithm maximizes the inter-class difference between two motor imagery tasks.
Simulations on both synthetic noncircular sources and motor imagery experiments using real-world EEG support the approach.", "title": "" }, { "docid": "22e677f2073599d6ffc9eadf6f3a833f", "text": "Statistical inference in psychology has traditionally relied heavily on p-value significance testing. This approach to drawing conclusions from data, however, has been widely criticized, and two types of remedies have been advocated. The first proposal is to supplement p values with complementary measures of evidence, such as effect sizes. The second is to replace inference with Bayesian measures of evidence, such as the Bayes factor. The authors provide a practical comparison of p values, effect sizes, and default Bayes factors as measures of statistical evidence, using 855 recently published t tests in psychology. The comparison yields two main results. First, although p values and default Bayes factors almost always agree about what hypothesis is better supported by the data, the measures often disagree about the strength of this support; for 70% of the data sets for which the p value falls between .01 and .05, the default Bayes factor indicates that the evidence is only anecdotal. Second, effect sizes can provide additional evidence to p values and default Bayes factors. The authors conclude that the Bayesian approach is comparatively prudent, preventing researchers from overestimating the evidence in favor of an effect.", "title": "" }, { "docid": "265e9de6c65996e639fd265be170e039", "text": "Topical crawling is a young and creative area of research that holds the promise of benefiting from several sophisticated data mining techniques. The use of classification algorithms to guide topical crawlers has been sporadically suggested in the literature. No systematic study, however, has been done on their relative merits. Using the lessons learned from our previous crawler evaluation studies, we experiment with multiple versions of different classification schemes. The crawling process is modeled as a parallel best-first search over a graph defined by the Web. The classifiers provide heuristics to the crawler thus biasing it towards certain portions of the Web graph. Our results show that Naive Bayes is a weak choice for guiding a topical crawler when compared with Support Vector Machine or Neural Network. Further, the weak performance of Naive Bayes can be partly explained by extreme skewness of posterior probabilities generated by it. We also observe that despite similar performances, different topical crawlers cover subspaces on the Web with low overlap.", "title": "" } ]
scidocsrr
a4bc9b166b4c926585f670760a3169e1
Why people buy virtual items in virtual worlds with real money
[ { "docid": "bd13f54cd08fe2626fe8de4edce49197", "text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that reflects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. © 2001 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "06b4bfebe295e3dceadef1a842b2e898", "text": "Constant changes in the economic environment, where globalization and the development of the knowledge economy act as drivers, are systematically pushing companies towards the challenge of accessing external markets. Web localization constitutes a new field of study and professional intervention. From the translation perspective, localization equates to the website being adjusted to the typological, discursive and genre conventions of the target culture, adapting that website to a different language and culture. This entails much more than simply translating the content of the pages. The content of a webpage is made up of text, images and other multimedia elements, all of which have to be translated and subjected to cultural adaptation. A case study has been carried out to analyze the current presence of localization within Spanish SMEs from the chemical sector. Two types of indicator have been established for evaluating the sample: indicators for evaluating company websites (with a Likert scale from 0–4) and indicators for evaluating web localization (0–2 scale). The results show overall website quality is acceptable (2.5 points out of 4). The higher rating has been obtained by the system quality (with 2.9), followed by information quality (2.7 points) and, lastly, service quality (1.9 points). In the web localization evaluation, the contact information aspects obtain 1.4 points, the visual aspect 1.04, and the navigation aspect was the worse considered (0.37). These types of analysis facilitate the establishment of practical recommendations aimed at SMEs in order to increase their international presence through the localization of their websites.", "title": "" }, { "docid": "570eca9884edb7e4a03ed95763be20aa", "text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.", "title": "" }, { "docid": "a72e4785509d85702096fb304e9fdac5", "text": "Cross-lingual adaptation aims to learn a prediction model in a label-scarce target language by exploiting labeled data from a labelrich source language. An effective crosslingual adaptation system can substantially reduce the manual annotation effort required in many natural language processing tasks. In this paper, we propose a new cross-lingual adaptation approach for document classification based on learning cross-lingual discriminative distributed representations of words. Specifically, we propose to maximize the loglikelihood of the documents from both language domains under a cross-lingual logbilinear document model, while minimizing the prediction log-losses of labeled documents. We conduct extensive experiments on cross-lingual sentiment classification tasks of Amazon product reviews. 
Our experimental results demonstrate the efficacy of the proposed cross-lingual adaptation approach.", "title": "" }, { "docid": "9d5ca4c756b63c60f6a9d6308df63ea3", "text": "This paper presents recent advances in the project: development of a convertible unmanned aerial vehicle (UAV). This aircraft is able to change its flight configuration from hover to level flight and vice versa by means of a transition maneuver, while maintaining the aircraft in flight. For this purpose a nonlinear control strategy based on Lyapunov design is given. Numerical results are presented showing the effectiveness of the proposed approach.", "title": "" }, { "docid": "1202e46fcc6c2f88b81fcf153ed4fd7d", "text": "Recently, several high dimensional classification methods have been proposed to automatically discriminate between patients with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and elderly controls (CN) based on T1-weighted MRI. However, these methods were assessed on different populations, making it difficult to compare their performance. In this paper, we evaluated the performance of ten approaches (five voxel-based methods, three methods based on cortical thickness and two methods based on the hippocampus) using 509 subjects from the ADNI database. Three classification experiments were performed: CN vs AD, CN vs MCIc (MCI who had converted to AD within 18 months, MCI converters - MCIc) and MCIc vs MCInc (MCI who had not converted to AD within 18 months, MCI non-converters - MCInc). Data from 81 CN, 67 MCInc, 39 MCIc and 69 AD were used for training and hyperparameters optimization. The remaining independent samples of 81 CN, 67 MCInc, 37 MCIc and 68 AD were used to obtain an unbiased estimate of the performance of the methods. For AD vs CN, whole-brain methods (voxel-based or cortical thickness-based) achieved high accuracies (up to 81% sensitivity and 95% specificity). For the detection of prodromal AD (CN vs MCIc), the sensitivity was substantially lower. For the prediction of conversion, no classifier obtained significantly better results than chance. We also compared the results obtained using the DARTEL registration to that using SPM5 unified segmentation. DARTEL significantly improved six out of 20 classification experiments and led to lower results in only two cases. Overall, the use of feature selection did not improve the performance but substantially increased the computation times.", "title": "" }, { "docid": "96e56dcf3d38c8282b5fc5c8ae747a66", "text": "The solid-state transformer (SST) was conceived as a replacement for the conventional power transformer, with both lower volume and weight. The smart transformer (ST) is an SST that provides ancillary services to the distribution and transmission grids to optimize their performance. Hence, the focus shifts from hardware advantages to functionalities. One of the most desired functionalities is the dc connectivity to enable a hybrid distribution system. For this reason, the ST architecture shall be composed of at least two power stages. The standard design procedure for this kind of system is to design each power stage for the maximum load. However, this design approach might limit additional services, like the reactive power compensation on the medium voltage (MV) side, and it does not consider the load regulation capability of the ST on the low voltage (LV) side. 
If the SST is tailored to the services that it shall provide, different stages will have different designs, so that the ST is no longer a mere application of the SST but an entirely new subject.", "title": "" }, { "docid": "a45109840baf74c61b5b6b8f34ac81d5", "text": "Decision-making groups can potentially benefit from pooling members' information, particularly when members individually have partial and biased information but collectively can compose an unbiased characterization of the decision alternatives. The proposed biased sampling model of group discussion, however, suggests that group members often fail to effectively pool their information because discussion tends to be dominated by (a) information that members hold in common before discussion and (b) information that supports members' existent preferences. In a political caucus simulation, group members individually read candidate descriptions that contained partial information biased against the most favorable candidate and then discussed the candidates as a group. Even though groups could have produced unbiased composites of the candidates through discussion, they decided in favor of the candidate initially preferred by a plurality rather than the most favorable candidate. Group members' preand postdiscussion recall of candidate attributes indicated that discussion tended to perpetuate, not to correct, members' distorted pictures of the candidates.", "title": "" }, { "docid": "97c22bf7654160e53c24eee7ebe97333", "text": "‘‘Sexting’’ refers to sending and receiving sexually suggestive images, videos, or texts on cell phones. As a means for maintaining or initiating a relationship, sexting behavior and attitudes may be understood through adult attachment theory. One hundred and twenty-eight participants (M = 22 and F = 106), aged 18–30 years, completed an online questionnaire about their adult attachment styles and sexting behavior and attitudes. Attachment anxiety predicted sending texts that solicit sexual activity for those individuals in relationships. Attachment anxiety also predicted positive attitudes towards sexting such as accepting it as normal, that it will enhance the relationship, and that partners will expect sexting. Sexting may be a novel form for expressing attachment anxiety. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9f3803ae394163e32fe81784b671de92", "text": "A smart community is a distributed system consisting of a set of smart homes which utilize the smart home scheduling techniques to enable customers to automatically schedule their energy loads targeting various purposes such as electricity bill reduction. Smart home scheduling is usually implemented in a decentralized fashion inside a smart community, where customers compete for the community level renewable energy due to their relatively low prices. Typically there exists an aggregator as a community wide electricity policy maker aiming to minimize the total electricity bill among all customers. This paper develops a new renewable energy aware pricing scheme to achieve this target. We establish the proof that under certain assumptions the optimal solution of decentralized smart home scheduling is equivalent to that of the centralized technique, reaching the theoretical lower bound of the community wide total electricity bill. In addition, an advanced cross entropy optimization technique is proposed to compute the pricing scheme of renewable energy, which is then integrated in smart home scheduling. 
The simulation results demonstrate that our pricing scheme facilitates the reduction of both the community wide electricity bill and individual electricity bills compared to the uniform pricing. In particular, the community wide electricity bill can be reduced to only 0.06 percent above the theoretic lower bound.", "title": "" }, { "docid": "5ee610b61deefffc1b054d908587b406", "text": "Self-shaping of curved structures, especially those involving flexible thin layers, is attracting increasing attention because of their broad potential applications in, e.g., nanoelectromechanical andmicroelectromechanical systems, sensors, artificial skins, stretchable electronics, robotics, and drug delivery. Here, we provide an overview of recent experimental, theoretical, and computational studies on the mechanical selfassembly of strain-engineered thin layers, with an emphasis on systems in which the competition between bending and stretching energy gives rise to a variety of deformations, such as wrinkling, rolling, and twisting. We address the principle of mechanical instabilities, which is often manifested in wrinkling or multistability of strain-engineered thin layers. The principles of shape selection and transition in helical ribbons are also systematically examined. We hope that a more comprehensive understanding of the mechanical principles underlying these rich phenomena can foster the development of techniques for manufacturing functional three-dimensional structures on demand for a broad spectrum of engineering applications.", "title": "" }, { "docid": "2b3c9b9f92582af41fcde0186c9bd0f6", "text": "Person re-identification is a challenging task mainly due to factors such as background clutter, pose, illumination and camera point of view variations. These elements hinder the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished. To improve the representation learning, usually local features from human body parts are extracted. However, the common practice for such a process has been based on bounding box part detection. In this paper, we propose to adopt human semantic parsing which, due to its pixel-level accuracy and capability of modeling arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing in person re-identification and not only considerably outperforms its counter baseline, but achieves state-of-the-art performance. We also show that, by employing a simple yet effective training strategy, standard popular deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification, while operating solely on full image, can dramatically outperform current state-of-the-art. Our proposed methods improve state-of-the-art person re-identification on: Market-1501 [48] by ~17% in mAP and ~6% in rank-1, CUHK03 [24] by ~4% in rank-1 and DukeMTMC-reID [50] by ~24% in mAP and ~10% in rank-1.", "title": "" }, { "docid": "10f31578666795a3b1ad852929769fc5", "text": "CNNs have been successfully used in audio, image and text classification, analysis and generation [12,17,18], whereas the RNNs with LSTM cells [5,6] have been widely adopted for solving sequence transduction problems such as language modeling and machine translation [19,3,5]. 
The RNN models typically align the element positions of the input and output sequences to steps in computation time for generating the sequenced hidden states, with each depending on the current element and the previous hidden state. Such operations are inherently sequential which precludes parallelization and becomes the performance bottleneck. This situation has motivated researchers to extend the easily parallelizable CNN models for more efficient sequence-to-sequence mapping. Once such efforts can deliver satisfactory quality, the usage of CNN in deep learning would be significantly broadened.", "title": "" }, { "docid": "0974cee877ff2fecfda81d48012c07d3", "text": "New method of blinking detection is proposed. The utmost important of blinking detection method is robust against different users, noise, and also change of eye shape. In this paper, we propose blinking detection method by measuring the distance between two arcs of eye (upper part and lower part). We detect eye arcs by apply Gabor filter onto eye image. As we know that Gabor filter has advantage on image processing application since it able to extract spatial localized spectral features such as line, arch, and other shapes. After two of eye arcs are detected, we measure the distance between arcs of eye by using connected labeling method. The open eye is marked by the distance between two arcs is more than threshold and otherwise, the closed eye is marked by the distance less than threshold. The experiment result shows that our proposed method robust enough against different users, noise, and eye shape changes with perfectly accuracy.", "title": "" }, { "docid": "8503c9989f9706805a74bbd5c964ab07", "text": "Since the phenomenon of cloud computing was proposed, there is an unceasing interest for research across the globe. Cloud computing has been seen as unitary of the technology that poses the next-generation computing revolution and rapidly becomes the hottest topic in the field of IT. This fast move towards Cloud computing has fuelled concerns on a fundamental point for the success of information systems, communication, virtualization, data availability and integrity, public auditing, scientific application, and information security. Therefore, cloud computing research has attracted tremendous interest in recent years. In this paper, we aim to precise the current open challenges and issues of Cloud computing. We have discussed the paper in three-fold: first we discuss the cloud computing architecture and the numerous services it offered. Secondly we highlight several security issues in cloud computing based on its service layer. Then we identify several open challenges from the Cloud computing adoption perspective and its future implications. Finally, we highlight the available platforms in the current era for cloud research and development.", "title": "" }, { "docid": "5546cbb6fac77d2d9fffab8ba0a50ed8", "text": "The next-generation electric power systems (smart grid) are studied intensively as a promising solution for energy crisis. One important feature of the smart grid is the integration of high-speed, reliable and secure data communication networks to manage the complex power systems effectively and intelligently. We provide in this paper a comprehensive survey on the communication architectures in the power systems, including the communication network compositions, technologies, functions, requirements, and research challenges. 
As these communication networks are responsible for delivering power system related messages, we discuss specifically the network implementation considerations and challenges in the power system settings. This survey attempts to summarize the current state of research efforts in the communication networks of smart grid, which may help us identify the research problems in the continued studies. 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0a8150abf09c6551e4cd771d12ed66c1", "text": "Sarcasm presents a negative meaning with positive expressions and is a non-literalistic expression. Sarcasm detection is an important task because it contributes directly to the improvement of the accuracy of sentiment analysis tasks. In this study, we propose a extraction method of sarcastic sentences in product reviews. First, we analyze sarcastic sentences in product reviews and classify the sentences into 8 classes by focusing on evaluation expressions. Next, we generate classification rules for each class and use them to extract sarcastic sentences. Our method consists of three stage, judgment processes based on rules for 8 classes, boosting rules and rejection rules. In the experiment, we compare our method with a baseline based on a simple rule. The experimental result shows the effectiveness of our method.", "title": "" }, { "docid": "a289829cb63b56280a1e06f69c6670a9", "text": "This article presents an overview of the ability model of emotional intelligence and includes a discussion about how and why the concept became useful in both educational and workplace settings. We review the four underlying emotional abilities comprising emotional intelligence and the assessment tools that that have been developed to measure the construct. A primary goal is to provide a review of the research describing the correlates of emotional intelligence. We describe what is known about how emotionally intelligent people function both intraand interpersonally and in both academic and workplace settings. The facts point in one direction: The job offer you have in hand is perfect – great salary, ideal location, and tremendous growth opportunities. Yet, there is something that makes you feel uneasy about resigning from your current position and moving on. What will you do? Ignore the feeling and choose what appears to be the logical path, or go with your gut and risk disappointing your family? Or, might you consider both your thoughts and feelings about the job in order to make the decision? Solving problems and making wise decisions using both thoughts and feelings or logic and intuition is a part of what we refer to as emotional intelligence (Mayer & Salovey, 1997; Salovey & Mayer, 1990). Linking emotions and intelligence was relatively novel when first introduced in a theoretical model about twenty years ago (Salovey & Mayer, 1990; but see Gardner, 1983 ⁄1993). Among the many questions posed by both researchers and laypersons alike were: Is emotional intelligence an innate, nonmalleable mental ability? Can it be acquired with instruction and training? Is it a new intelligence or just the repackaging of existing constructs? How can it be measured reliably and validly? What does the existence of an emotional intelligence mean in everyday life? In what ways does emotional intelligence affect mental health, relationships, daily decisions, and academic and workplace performance? 
In this article, we provide an overview of the theory of emotional intelligence, including a brief discussion about how and why the concept has been used in both educational and workplace settings. Because the field is now replete with articles, books, and training manuals on the topic – and because the definitions, claims, and measures of emotional intelligence have become extremely diverse – we also clarify definitional and measurement issues. A final goal is to provide an up-to-date review of the research describing what the lives of emotionally intelligent people ‘look like’ personally, socially, academically, and in the workplace. What is Emotional Intelligence? Initial conception of emotional intelligence Emotional intelligence was described formally by Salovey and Mayer (1990). They defined it as ‘the ability to monitor one’s own and others’ feelings and emotions, to discriminate among them and to use this information to guide one’s thinking and actions’ (p. 189). They also provided an initial empirical demonstration of how an aspect of emotional intelligence could be measured as a mental ability (Mayer, DiPaolo, & Salovey, 1990). In both articles, emotional intelligence was presented as a way to conceptualize the relation between cognition and affect. Historically, ‘emotion’ and ‘intelligence’ were viewed as being in opposition to one another (Lloyd, 1979). How could one be intelligent about the emotional aspects of life when emotions derail individuals from achieving their goals (e.g., Young, 1943)? The theory of emotional intelligence suggested the opposite: emotions make cognitive processes adaptive and individuals can think rationally about emotions. Emotional intelligence is an outgrowth of two areas of psychological research that emerged over forty years ago. The first area, cognition and affect, involved how cognitive and emotional processes interact to enhance thinking (Bower, 1981; Isen, Shalker, Clark, & Karp, 1978; Zajonc, 1980). Emotions like anger, happiness, and fear, as well as mood states, preferences, and bodily states, influence how people think, make decisions, and perform different tasks (Forgas & Moylan, 1987; Mayer & Bremer, 1985; Salovey & Birnbaum, 1989). The second was an evolution in models of intelligence itself. Rather than viewing intelligence strictly as how well one engaged in analytic tasks associated with memory, reasoning, judgment, and abstract thought, theorists and investigators began considering intelligence as a broader array of mental abilities (e.g., Cantor & Kihlstrom, 1987; Gardner, 1983/1993; Sternberg, 1985). Sternberg (1985), for example, urged educators and scientists to place an emphasis on creative abilities and practical knowledge that could be acquired through careful navigation of one’s everyday environment. Gardner’s (1983) ‘personal intelligences,’ including the capacities involved in accessing one’s own feeling life (intrapersonal intelligence) and the ability to monitor others’ emotions and mood (interpersonal intelligence), provided a compatible backdrop for considering emotional intelligence as a viable construct.
Popularization of emotional intelligence The term ‘emotional intelligence’ was mostly unfamiliar to researchers and the general public until Goleman (1995) wrote the best-selling trade book, Emotional Intelligence: Why it can Matter More than IQ. The book quickly caught the eye of the media, public, and researchers. In it, Goleman described how scientists had discovered a connection between emotional competencies and prosocial behavior; he also declared that emotional intelligence was both an answer to the violence plaguing our schools and ‘as powerful and at times more powerful than IQ’ in predicting success in life (Goleman, 1995; p. 34). Both in the 1995 book and in a later book focusing on workplace applications of emotional intelligence (Goleman, 1998), Goleman described the construct as an array of positive attributes including political awareness, self-confidence, conscientiousness, and achievement motives rather than focusing only on an intelligence that could help individuals solve problems effectively (Brackett & Geher, 2006). Goleman’s views on emotional intelligence, in part because they were articulated for/to the general public, extended beyond the empirical evidence that was available (Davies, Stankov, & Roberts, 1998; Hedlund & Sternberg, 2000; Mayer & Cobb, 2000). Yet, people from all professions – educators, psychologists, human resource professionals, and corporate executives – began to incorporate emotional intelligence into their daily vernacular and professional practices. Definitions and measures of emotional intelligence varied widely, with little consensus about what emotional intelligence is and is not. Alternative models of emotional intelligence Today, there are two scientific approaches to emotional intelligence. They can be characterized as the ability model and mixed models (Mayer, Caruso, & Salovey, 2000). The ability model views emotional intelligence as a standard intelligence and argues that the construct meets traditional criteria for an intelligence (Mayer, Roberts, & Barsade, 2008b; Mayer & Salovey, 1997; Mayer, Salovey, & Caruso, 2008a). Proponents of the ability model measure emotional intelligence as a mental ability with performance assessments that have a criterion of correctness (i.e., there are better and worse answers, which are determined using complex scoring algorithms). Mixed models are so called because they mix the ability conception with personality traits and competencies such as optimism, self-esteem, and emotional self-efficacy (see Cherniss, 2010, for a review). Proponents of this approach use self-report instruments as opposed to performance assessments to measure emotional intelligence (i.e., instead of asking people to demonstrate how they perceive an emotional expression accurately, self-report measures ask people to judge and report how good they are at perceiving others’ emotions accurately). There has been a debate about the ideal method to measure emotional intelligence. On the surface, self-report (or self-judgment) scales are desirable: they are less costly, easier to administer, and take considerably less time to complete than performance tests (Brackett, Rivers, Shiffman, Lerner, & Salovey, 2006).
However, it is well known that self-report measures are problematic because respondents can provide socially desirable responses rather than truthful ones, or respondents may not actually know how good they are at emotion-based tasks – to whom do they compare themselves (e.g., DeNisi & Shaw, 1977; Paulhus, Lysy, & Yik, 1998)? As they apply to emotional intelligence, self-report measures are related weakly to performance assessments and lack discriminant validity from existing measures of personality (Brackett & Mayer, 2003; Brackett et al., 2006). In a meta-analysis of 13 studies that compared performance tests (e.g., Mayer, Salovey, & Caruso, 2002) and self-report scales (e.g., EQ-i; Bar-On, 1997), Van Rooy, Viswesvaran, and Pluta (2005) reported that performance tests were relatively distinct from self-report measures (r = 0.14). Even when a self-report measure is designed to map onto performance tests, correlations are very low (Brackett et al., 2006a). Finally, self-report measures of emotional intelligence are more susceptible to faking than performance tests (Day & Carroll, 2008). For the reasons described in this section, we assert that the ability-based definition and performance-based measure", "title": "" } ]
scidocsrr
846e8e63f60f546d8538c1daad27bd1a
Hand gesture-based visual user interface for infotainment
[ { "docid": "563c0f48ce83eddc15cd2f3d88c7efda", "text": "This paper presents investigations into the role of computer-vision technology in developing safer automobiles. We consider vision systems, which can not only look out of the vehicle to detect and track roads and avoid hitting obstacles or pedestrians but simultaneously look inside the vehicle to monitor the attentiveness of the driver and even predict her intentions. In this paper, a systems-oriented framework for developing computer-vision technology for safer automobiles is presented. We will consider three main components of the system: environment, vehicle, and driver. We will discuss various issues and ideas for developing models for these main components as well as activities associated with the complex task of safe driving. This paper includes a discussion of novel sensory systems and algorithms for capturing not only the dynamic surround information of the vehicle but also the state, intent, and activity patterns of drivers", "title": "" } ]
[ { "docid": "169ea06b2ec47b77d01fe9a4d4f8a265", "text": "One of the main challenges in security today is defending against malware attacks. As trends and anecdotal evidence show, preventing these attacks, regardless of their indiscriminate or targeted nature, has proven difficult: intrusions happen and devices get compromised, even at security-conscious organizations. As a consequence, an alternative line of work has focused on detecting and disrupting the individual steps that follow an initial compromise and are essential for the successful progression of the attack. In particular, several approaches and techniques have been proposed to identify the command and control (C&C) channel that a compromised system establishes to communicate with its controller.\n A major oversight of many of these detection techniques is the design’s resilience to evasion attempts by the well-motivated attacker. C&C detection techniques make widespread use of a machine learning (ML) component. Therefore, to analyze the evasion resilience of these detection techniques, we first systematize works in the field of C&C detection and then, using existing models from the literature, go on to systematize attacks against the ML components used in these approaches.", "title": "" }, { "docid": "97582a93ef3977fab8b242a1ce102459", "text": "We propose a distributed, multi-camera video analysis paradigm for airport security surveillance. We propose to use a new class of biometry signatures, which are called soft biometry including a person's height, build, skin tone, color of shirts and trousers, motion pattern, trajectory history, etc., to ID and track errant passengers and suspicious events without having to shut down a whole terminal building and cancel multiple flights. The proposed research is to enable the reliable acquisition, maintenance, and correspondence of soft biometry signatures in a coordinated manner from a large number of video streams for security surveillance. The intellectual merit of the proposed research is to address three important video analysis problems in a distributed, multi-camera surveillance network: sensor network calibration, peer-to-peer sensor data fusion, and stationary-dynamic cooperative camera sensing.", "title": "" }, { "docid": "c8f1d563987245bcb052e4b2c3937ec9", "text": "Scaling distributed deep learning to a massive GPU cluster level is challenging due to the instability of the large mini-batch training and the overhead of the gradient synchronization. We address the instability of the large mini-batch training with batch size control. We address the overhead of the gradient synchronization with 2D-Torus all-reduce. Specifically, 2D-Torus all-reduce arranges GPUs in a logical 2D grid and performs a series of collective operations in different orientations. These two techniques are implemented with Neural Network Libraries (NNL). We have successfully trained ImageNet/ResNet-50 in 224 seconds without significant accuracy loss on ABCI cluster.", "title": "" }, { "docid": "5cfaec0f198065bb925a1fb4ffb53f60", "text": "In the emerging inter-disciplinary field of art and image processing, algorithms have been developed to assist the analysis of art work. In most applications, especially brush stroke analysis, high resolution digital images of paintings are required to capture subtle patterns and details in the high frequency range of the spectrum.
Algorithms have been developed to learn styles of painters from their digitized paintings to help identify the authenticity of controversial paintings. However, high quality testing datasets containing both originals and forgeries are limited to confidential image files provided by museums, which are not publicly available, and a small set of original/copy paintings painted by the same artist, where copies were deferred to two weeks after the originals were finished. To date, no synthesized painting by computers from a real painting has been used as a negative test case, mainly due to the limitation of prevailing style transfer algorithms. There are two main types of style transfer algorithms, either transferring the tone (color, contrast, saturation, etc.) of an image, preserving its patterns and details, or uniformly distorting the texture of an image to create “style”. In this paper, we are interested in a higher level of style transfer, particularly, transferring a source natural image (e.g. a photo) to a high resolution painting given a reference painting of a similar object. The transferred natural image would have a similar presentation of the original object to that of the reference painting. In general, an object is painted in a different style of brush strokes than that of the background, hence the desired style transfer algorithm should be able to recognize the object in the source natural image and transfer brush stroke styles in the reference painting in a content-aware way such that the styles of the foreground and the background, and moreover different parts of the foreground in the transferred image, are consistent with those in the reference painting. Recently, an algorithm based on a deep convolutional neural network has been developed to transfer artistic style from an art painting to a photo [2]. Successful as it is in transferring styles from impressionist paintings of artists such as Vincent van Gogh to photos of various scenes, the algorithm is prone to distorting the structure of the content in the source image and introducing artifacts/new", "title": "" }, { "docid": "a84d4d2815a6d870055514f633770c80", "text": "BACKGROUND\nNeuroimaging studies have shown that major depressive disorder (MDD) is accompanied by structural and functional abnormalities in specific brain regions and connections; yet, little is known about alterations of the topological organization of whole-brain networks in MDD patients.\n\n\nMETHODS\nThirty drug-naive, first-episode MDD patients and 63 healthy control subjects underwent a resting-state functional magnetic resonance imaging scan. The whole-brain functional networks were constructed by thresholding partial correlation matrices of 90 brain regions, and their topological properties (e.g., small-world, efficiency, and nodal centrality) were analyzed using graph theory-based approaches. Nonparametric permutation tests were further used for group comparisons of topological metrics.\n\n\nRESULTS\nBoth the MDD and control groups showed small-world architecture in brain functional networks, suggesting a balance between functional segregation and integration. However, compared with control subjects, the MDD patients showed altered quantitative values in the global properties, characterized by lower path length and higher global efficiency, implying a shift toward randomization in their brain networks.
The MDD patients exhibited increased nodal centralities, predominantly in the caudate nucleus and default-mode regions, including the hippocampus, inferior parietal, medial frontal, and parietal regions, and reduced nodal centralities in the occipital, frontal (orbital part), and temporal regions. The altered nodal centralities in the left hippocampus and the left caudate nucleus were correlated with disease duration and severity.\n\n\nCONCLUSIONS\nThese results suggest that depressive disorder is associated with disruptions in the topological organization of functional brain networks and that this disruption may contribute to disturbances in mood and cognition in MDD patients.", "title": "" }, { "docid": "3d56f88bf8053258a12e609129237b19", "text": "The present study focuses on the relationships between entrepreneurial characteristics (achievement orientation, risk taking propensity, locus of control, and networking), e-service business factors (reliability, responsiveness, ease of use, and self-service), governmental support, and the success of e-commerce entrepreneurs. Results confirm that the achievement orientation and locus of control of founders and business emphasis on reliability and ease of use functions of e-service quality are positively related to the success of e-commerce entrepreneurial ventures in Thailand. Founder risk taking and networking, e-service responsiveness and self-service, and governmental support are found to be non-significant.", "title": "" }, { "docid": "6bdb8048915000b2d6c062e0e71b8417", "text": "Depressive disorders are the most typical disease affecting many different factors of humanity. University students may be at increased risk of depression owing to the pressure and stress they encounter. Therefore, the purpose of this study is to compare the level of depression among male and female athlete and non-athlete undergraduate students of a private university in Esfahan, Iran. The participants in this research are composed of 400 male and female athlete as well as non-athlete Iranian undergraduate students. The Beck depression test (BDI) was employed to measure the degree of depression. T-test was used to evaluate the distinction between athletes and non-athletes at P≤0.05. The ANOVA was conducted to examine whether there was a relationship between level of depression among non-athletes and athletes. The result showed that the prevalence rate of depression among non-athlete male undergraduate students is significantly higher than that of athlete male students. The results also presented that level of depression among female students is much more frequent compared to males. This can be due to the fatigue and lack of energy that are more frequent among female students in comparison to male students. Physical activity was negatively related to the level of depression by severity among male and female undergraduate students. However, there is no distinct relationship between physical activity and level of depression according to the age of athlete and non-athlete male and female undergraduate students. This study has essential implications for clinical psychology due to the relationship between physical activity and prevalence of depression.", "title": "" }, { "docid": "e029e4722b722be2de14de9873d6a652", "text": "Greig cephalopolysyndactyly syndrome (GCPS) is a rare multiple congenital anomaly syndrome that is inherited in an autosomal dominant pattern and is caused by haploinsufficiency of the GLI3 gene.
The syndrome typically includes preaxial or mixed pre- and postaxial polydactyly and cutaneous syndactyly, ocular hypertelorism, and macrocephaly in its typical forms, but sometimes includes hydrocephalus, seizures, mental retardation, and developmental delay in more severe cases. Patients with milder forms of GCPS can have subtle craniofacial dysmorphic features that are difficult to distinguish from normal variation. This article presents the spectrum of dysmorphic findings in GCPS highlighting some of its key presenting features to familiarize clinicians with the variable expressivity of the condition.", "title": "" }, { "docid": "9089faebf4b5fd84bf6e7466b788aab2", "text": "Decentralized anonymity infrastructures are still not in wide use today. While there are technical barriers to a secure robust design, our lack of understanding of the incentives to participate in such systems remains a major roadblock. Here we explore some reasons why anonymity systems are particularly hard to deploy, enumerate the incentives to participate either as senders or also as nodes, and build a general model to describe the effects of these incentives. We then describe and justify some simplifying assumptions to make the model manageable, and compare optimal strategies for participants based on a variety of", "title": "" }, { "docid": "44928aa4c5b294d1b8f24eaab14e9ce7", "text": "Most exact algorithms for solving partially observable Markov decision processes (POMDPs) are based on a form of dynamic programming in which a piecewise-linear and convex representation of the value function is updated at every iteration to more accurately approximate the true value function. However, the process is computationally expensive, thus limiting the practical application of POMDPs in planning. To address this current limitation, we present a parallel distributed algorithm based on the Restricted Region method proposed by Cassandra, Littman and Zhang [1]. We compare performance of the parallel algorithm against a serial implementation of Restricted Region.", "title": "" }, { "docid": "6b6fb43a134f3677e0cbfabc10fa8b54", "text": "The role of leadership as a core management function becomes extremely important in the case of rapid changes occurring in the market, and then within an organization that must adapt to new changes. Therefore, leadership becomes a central topic of study within the field of management. The terms manager and leader are not equivalent and do not have the same meaning. The manager may be the person who operates in a stable business environment; a leader is needed in conditions of uncertainty, to identify new opportunities for the company in a dynamic business environment. Therefore, leadership, charisma, the ability to inspire employees, and the use of power are becoming the key to the success of the enterprise in its market and among its competitors. There is no dilemma about whether leadership is crucial for success or not; the importance of leadership is unquestioned. Therefore, the study of leadership as a management tool is important for the success of the business. Leadership skill shapes employees' satisfaction with their work activity. A company that has no leader will end up with bad results and unmotivated, disgruntled employees, while an organization that is based on knowledge and expertise in the field of management will be successful in its own business domain.
Because of its importance in achieving the goals set out by managers and organizations, the purpose of this paper is to examine the effects of leadership on the effectiveness of employees in enterprises. The results show that leadership skill affects the efficiency of enterprises and employee motivation. Leadership skills are becoming a key success factor in business and in achieving the organization's objectives.", "title": "" }, { "docid": "d2ee6e2e3c7e851e75558ab69d159e08", "text": "the later stages of the development life cycle versus during production (Brooks 1995). Therefore, testing is one of the most critical and time-consuming phases of the software development life cycle, which accounts for 50 percent of the total cost of development (Brooks 1995). The testing phase should be planned carefully in order to save time and effort while detecting as many defects as possible. Different verification, validation, and testing strategies have been proposed so far to optimize the time and effort utilized during the testing phase: code reviews (Adrian, Branstad, and Cherniavsky 1982; Shull et al. 2002), inspections (Fagan 1976), and automated tools (Menzies, Greenwald, and Frank 2007; Nagappan, Ball, and Murphy 2006; Ostrand, Weyuker, and Bell 2005). Defect predictors improve the efficiency of the testing phase in addition to helping developers assess the quality and defect-proneness of their software product (Fenton and Neil 1999). They also help managers in allocating resources. Most defect prediction models combine well-known methodologies and algorithms such as statistical techniques (Nagappan, Ball, and Murphy 2006; Ostrand, Weyuker, and Bell 2005; Zimmermann et al. 2004) and machine learning (Munson and Khoshgoftaar 1992; Fenton and Neil 1999; Lessmann et al. 2008; Moser, Pedrycz, and Succi 2008). They require historical data in terms of software metrics and actual defect rates, and combine these metrics and defect information as training data to learn which modules seem to be defect prone. Based on the knowledge from training data and software metrics acquired from a recently completed project, such tools can estimate defect-prone modules of that project.", "title": "" }, { "docid": "ad131f6baec15a011252f484f1ef8f18", "text": "Recent studies have shown that Alzheimer's disease (AD) is related to alteration in brain connectivity networks. One type of connectivity, called effective connectivity, defined as the directional relationship between brain regions, is essential to brain function. However, there have been few studies on modeling the effective connectivity of AD and characterizing its difference from normal controls (NC). In this paper, we investigate the sparse Bayesian Network (BN) for effective connectivity modeling. Specifically, we propose a novel formulation for the structure learning of BNs, which involves one L1-norm penalty term to impose sparsity and another penalty to ensure the learned BN to be a directed acyclic graph - a required property of BNs. We show, through both theoretical analysis and extensive experiments on eleven moderate and large benchmark networks with various sample sizes, that the proposed method has much improved learning accuracy and scalability compared with ten competing algorithms. We apply the proposed method to FDG-PET images of 42 AD and 67 NC subjects, and identify the effective connectivity models for AD and NC, respectively.
Our study reveals that the effective connectivity of AD is different from that of NC in many ways, including the global-scale effective connectivity, intra-lobe, inter-lobe, and inter-hemispheric effective connectivity distributions, as well as the effective connectivity associated with specific brain regions. These findings are consistent with known pathology and clinical progression of AD, and will contribute to AD knowledge discovery.", "title": "" }, { "docid": "2547be3cb052064fd1f995c45191e84d", "text": "With the new generation of full low floor passenger trains, the constraints of weight and size on the traction transformer are becoming stronger. The ultimate target weight for the transformer is 1 kg/kVA. The reliability and the efficiency are also becoming more important. To address these issues, a multilevel topology using medium frequency transformers has been developed. It permits to reduce the weight and the size of the system and improves the global life cycle cost of the vehicle. The proposed multilevel converter consists of sixteen bidirectional direct current converters (cycloconverters) connected in series to the catenary 15 kV, 16.7 Hz through a choke inductor. The cycloconverters are connected to sixteen medium frequency transformers (400 Hz) that are fed by sixteen four-quadrant converters connected in parallel to a 1.8 kV DC link with a 2f filter. The control, the command and the hardware of the prototype are described in detail.", "title": "" }, { "docid": "872d589cd879dee7d88185851b9546ab", "text": "Considering few treatments are available to slow or stop neurodegenerative disorders, such as Alzheimer’s disease and related dementias (ADRD), modifying lifestyle factors to prevent disease onset are recommended. The Voice, Activity, and Location Monitoring system for Alzheimer’s disease (VALMA) is a novel ambulatory sensor system designed to capture natural behaviours across multiple domains to profile lifestyle risk factors related to ADRD. Objective measures of physical activity and sleep are provided by lower limb accelerometry. Audio and GPS location records provide verbal and mobility activity, respectively. Based on a familiar smartphone package, data collection with the system has proven to be feasible in community-dwelling older adults. Objective assessments of everyday activity will impact diagnosis of disease and design of exercise, sleep, and social interventions to prevent and/or slow disease progression.", "title": "" }, { "docid": "888bb64b35edc7c4a44012b3d32e70e8", "text": "We present the Sketchy database, the first large-scale collection of sketch-photo pairs. We ask crowd workers to sketch particular photographic objects sampled from 125 categories and acquire 75,471 sketches of 12,500 objects. The Sketchy database gives us fine-grained associations between particular photos and sketches, and we use this to train cross-domain convolutional networks which embed sketches and photographs in a common feature space. We use our database as a benchmark for fine-grained retrieval and show that our learned representation significantly outperforms both hand-crafted features as well as deep features trained for sketch or photo classification. Beyond image retrieval, we believe the Sketchy database opens up new opportunities for sketch and image understanding and synthesis.", "title": "" }, { "docid": "5838d6a17e2223c6421da33d5985edd1", "text": "In this article, I provide commentary on the Rudd et al. 
(2009) article advocating thorough informed consent with suicidal clients. I examine the Rudd et al. recommendations in light of their previous empirical-research and clinical-practice articles on suicidality, and from the perspective of clinical practice with suicidal clients in university counseling center settings. I conclude that thorough informed consent is a clinical intervention that is still in preliminary stages of development, necessitating empirical research and clinical training before actual implementation as an ethical clinical intervention. (PsycINFO Database Record (c) 2010 APA, all rights reserved).", "title": "" }, { "docid": "f9ed550f355fc3a89ffe2e95a8881ef8", "text": "In this article, we review the state-of-the-art techniques in mining data streams for mobile and ubiquitous environments. We start the review with a concise background of data stream processing, presenting the building blocks for mining data streams. In a wide range of applications, data streams are required to be processed on small ubiquitous devices like smartphones and sensor devices. Mobile and ubiquitous data mining target these applications with tailored techniques and approaches addressing scarcity of resources and mobility issues. Two categories can be identified for mobile and ubiquitous mining of streaming data: single-node and distributed. This survey will cover both categories. Mining mobile and ubiquitous data require algorithms with the ability to monitor and adapt the working conditions to the available computational resources. We identify the key characteristics of these algorithms and present illustrative applications. Distributed data stream mining in the mobile environment is then discussed, presenting the Pocket Data Mining framework. Mobility of users stimulates the adoption of context-awareness in this area of research. Context-awareness and collaboration are discussed in the Collaborative Data Stream Mining, where agents share knowledge to learn adaptive accurate models. © 2014 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "760a303502d732ece14e3ea35c0c6297", "text": "Data centers are experiencing a remarkable growth in the number of interconnected servers. Being one of the foremost data center design concerns, network infrastructure plays a pivotal role in the initial capital investment and ascertaining the performance parameters for the data center. Legacy data center network (DCN) infrastructure lacks the inherent capability to meet the data centers growth trend and aggregate bandwidth demands. Deployment of even the highest-end enterprise network equipment only delivers around 50% of the aggregate bandwidth at the edge of network. The vital challenges faced by the legacy DCN architecture trigger the need for new DCN architectures, to accommodate the growing demands of the ‘cloud computing’ paradigm. We have implemented and simulated the state of the art DCN models in this paper, namely: (a) legacy DCN architecture, (b) switch-based, and (c) hybrid models, and compared their effectiveness by monitoring the network: (a) throughput and (b) average packet delay. The presented analysis may be perceived as a background benchmarking study for the further research on the simulation and implementation of the DCN-customized topologies and customized addressing protocols in the large-scale data centers. We have performed extensive simulations under various network traffic patterns to ascertain the strengths and inadequacies of the different DCN architectures. 
Moreover, we provide a firm foundation for further research and enhancement in DCN architectures. Copyright © 2012 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "1a8e1640a3e3feb12f60a7f3b9e2b1b3", "text": "The growth of email users has resulted in the dramatic increase in spam emails during the past few years. In this paper, four machine learning algorithms, which are Naïve Bayesian (NB), neural network (NN), support vector machine (SVM) and relevance vector machine (RVM), are proposed for spam classification. An empirical evaluation for them on the benchmark spam filtering corpora is presented. The experiments are performed based on different training set sizes and extracted feature sizes. Experimental results show that the NN classifier is unsuitable for use alone as a spam rejection tool. Generally, the performances of SVM and RVM classifiers are obviously superior to the NB classifier. Compared with SVM, RVM is shown to provide a similar classification result with fewer relevance vectors and much faster testing time. Despite the slower learning procedure, RVM is more suitable than SVM for spam classification in terms of the applications that require low complexity. © 2008 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
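One of the negative passages in the row above compares Naïve Bayesian, SVM, and RVM classifiers for spam filtering. Purely as a hedged sketch of the simplest of those baselines (the tiny corpus, labels, and test message below are invented placeholders, not the benchmark corpora used in that passage), a bag-of-words Naïve Bayes filter could be put together with scikit-learn roughly as follows.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented corpus standing in for a real spam benchmark.
emails = [
    "win a free prize now",
    "cheap meds limited offer",
    "meeting agenda for tomorrow",
    "please review the attached report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

# Bag-of-words counts followed by a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["claim your free prize"]))  # predicted label for a new message
```

In the passage's comparison, this kind of low-complexity bag-of-words NB model is the baseline against which the SVM and RVM classifiers are judged.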
90da735698eea402752431b424b4bb97
Parallel Multiscale Autoregressive Density Estimation
[ { "docid": "b0bd9a0b3e1af93a9ede23674dd74847", "text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-of-the-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.", "title": "" }, { "docid": "9ece98aee7056ff6c686c12bcdd41d31", "text": "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multidimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.", "title": "" } ]
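The WaveNet passage above conditions each audio sample only on the samples that precede it. As a rough illustrative sketch of that single ordering constraint (the filter taps and the toy signal below are invented placeholders, not values from the paper), a causal 1-D convolution in NumPy can be written so that output t never looks at inputs later than t.

```python
import numpy as np

def causal_conv1d(x, w):
    """Causal 1-D convolution: y[t] = sum_i w[i] * x[t - i].

    The input is padded on the left only, so y[t] never depends on any
    sample later than x[t] -- the ordering constraint behind WaveNet-style
    autoregressive models.
    """
    k = len(w)
    x_padded = np.concatenate([np.zeros(k - 1), x])  # pad on the left only
    return np.array([np.dot(w, x_padded[t:t + k][::-1]) for t in range(len(x))])

# Invented toy signal and filter taps, just to show the dependency pattern.
signal = np.array([0.0, 1.0, 0.5, -0.25, 0.75])
taps = np.array([0.5, 0.3, 0.2])
print(causal_conv1d(signal, taps))
```

Changing any later sample of `signal` leaves the earlier outputs untouched, which is the property that makes sample-by-sample (or, in multiscale variants, group-by-group) generation possible.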
[ { "docid": "d5696c9118437b81dc1818ecd8f18741", "text": "The contribution of this paper is to propose and experimentally validate an optimizing control strategy for power kites flying crosswind. The algorithm ensures the kite follows a reference path (control) and also periodically optimizes the reference path (efficiency optimization). The path-following part of the controller is capable of consistently following a reference path, despite significant time delays and wind variations, using position measurements only. The path-optimization part adjusts the reference path in order to maximize line tension. It uses a real-time optimization algorithm that combines off-line modeling knowledge and on-line measurements. The algorithm has been tested comprehensively on a small-scale prototype, and this paper focuses on experimental results.", "title": "" }, { "docid": "77b84c86b80d3e1c54b2ce4458a0cc52", "text": "We summarize three evaluations of an educational augmented reality application for geometry education, which have been conducted in 2000, 2003 and 2005 respectively. Repeated formative evaluations with more than 100 students guided the redesign of the application and its user interface throughout the years. We present and discuss the results regarding usability and simulator sickness providing guidelines on how to design augmented reality applications utilizing head-mounted displays.", "title": "" }, { "docid": "9308c1dfdf313f6268db9481723f533d", "text": "We report the discovery of a highly active Ni-Co alloy electrocatalyst for the oxidation of hydrazine (N(2)H(4)) and provide evidence for competing electrochemical (faradaic) and chemical (nonfaradaic) reaction pathways. The electrochemical conversion of hydrazine on catalytic surfaces in fuel cells is of great scientific and technological interest, because it offers multiple redox states, complex reaction pathways, and significantly more favorable energy and power densities compared to hydrogen fuel. Structure-reactivity relations of a Ni(60)Co(40) alloy electrocatalyst are presented with a 6-fold increase in catalytic N(2)H(4) oxidation activity over today's benchmark catalysts. We further study the mechanistic pathways of the catalytic N(2)H(4) conversion as function of the applied electrode potential using differentially pumped electrochemical mass spectrometry (DEMS). At positive overpotentials, N(2)H(4) is electrooxidized into nitrogen consuming hydroxide ions, which is the fuel cell-relevant faradaic reaction pathway. In parallel, N(2)H(4) decomposes chemically into molecular nitrogen and hydrogen over a broad range of electrode potentials. The electroless chemical decomposition rate was controlled by the electrode potential, suggesting a rare example of a liquid-phase electrochemical promotion effect of a chemical catalytic reaction (\"EPOC\"). The coexisting electrocatalytic (faradaic) and heterogeneous catalytic (electroless, nonfaradaic) reaction pathways have important implications for the efficiency of hydrazine fuel cells.", "title": "" }, { "docid": "6ee8efea33f518d68f5582097c4c2929", "text": "The COMPOSE project aims to provide an open Marketplace for the Internet of Things as well as the necessary platform to support it. A necessary component of COMPOSE is an API that allows things, COMPOSE users and the platform to communicate. The COMPOSE API allows for things to push data to the platform, the platform to initiate asynchronous actions on the things, and COMPOSE users to retrieve and process data from the things. 
In this paper we present the design and implementation of the COMPOSE API, as well as a detailed description of the main key requirements that the API must satisfy. The API documentation and the source code for the platform are available online.", "title": "" }, { "docid": "a4e733379c2720e731d448ec80599c53", "text": "As digitalization sustainably alters industries and societies, small and medium-sized enterprises (SME) must initiate a digital transformation to remain competitive and to address the increasing complexity of customer needs. Although many enterprises encounter challenges in practice, research does not yet provide practicable recommendations to increase the feasibility of digitalization. Furthermore, SME frequently fail to fully realize the implications of digitalization for their organizational structures, strategies, and operations, and have difficulties identifying a suitable starting point for corresponding initiatives. In order to address these challenges, this paper uses the concept of Business Process Management (BPM) to define a set of capabilities for a management framework, which builds upon the paradigm of process orientation to cope with the various requirements of digital transformation. Our findings suggest that enterprises can use a functioning BPM as a starting point for digitalization, while establishing necessary digital capabilities subsequently.", "title": "" }, { "docid": "dc330168eb4ca331c8fbfa40b6abdd66", "text": "For multimedia communications, the low computational complexity of the coder is required to integrate services of several media sources due to the limited computing capability of the personal information machine. The Multi-pulse Maximum Likelihood Quantization (MP-MLQ) algorithm with high computational complexity and high quality has been used in the G.723.1 standard codec. To reduce the computational complexity of the MP-MLQ method, this paper presents an efficient pre-selection scheme to simplify the excitation codebook search procedure which is computationally the most demanding. We propose a fast search algorithm which uses an energy function to predict the candidate pulses, and the codebook is redesigned to become the multi-track position structure. Simulation results show that the average of the perceptual evaluation of speech quality (PESQ) is degraded slightly, by only 0.056, and our proposed method can reduce computational complexity by about 52.8% relative to the original G.723.1 MP-MLQ computation load with perceptually negligible degradation. Our objective evaluations verify that the proposed method can provide speech quality comparable to that of the original MP-MLQ approach.", "title": "" }, { "docid": "562551c0f767ab8f467fccc8ff5b8244", "text": "In this research paper an attempt has been made to integrate the programmable logic controller (PLC) with an elevator for developing its control system. Thus, this paper describes the application of programmable logic controller for elevator control system. The PLC used for this project is GE FANUC with six inputs and four outputs. The programming language used is ladder diagram.", "title": "" }, { "docid": "fa81463948ef7d6f5eb3f6e928567b15", "text": "Many web sites collect reviews of products and services and use them to provide rankings of their quality. However, such rankings are not personalized. We investigate how the information in the reviews written by a particular user can be used to personalize the ranking she is shown.
We propose a new technique, topic profile collaborative filtering, where we build user profiles from users’ review texts and use these profiles to filter other review texts through the eyes of this user. We verify on data from an actual review site that review texts and topic profiles indeed correlate with ratings, and show that topic profile collaborative filtering provides both a better mean average error when predicting ratings and a better approximation of user preference orders.", "title": "" }, { "docid": "5f3dc141b69eb50e17bdab68a2195e13", "text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for the procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement into a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria is assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is confirmed with the results obtained.", "title": "" }, { "docid": "affbc04f6aa94e5d3d9665473384edec", "text": "Guided sparse depth upsampling aims to upsample an irregularly sampled sparse depth map when an aligned high-resolution color image is given as guidance. When deep convolutional neural networks (CNNs) become the optimal choice for many applications nowadays, how to deal with irregular and sparse data still remains a non-trivial problem. Inspired by the classical normalized convolution operation, this work proposes a normalized convolutional layer (NCL) implemented in CNNs. Sparse data are therefore explicitly considered in CNNs by the separation of both data and filters into a signal part and a certainty part. Based upon NCLs, we design a normalized convolutional neural network (NCNN) to perform guided sparse depth upsampling. Experiments on both indoor and outdoor datasets show that the proposed NCNN models achieve state-of-the-art upsampling performance. Moreover, the models using NCLs gain a great generalization ability to different sparsity levels.", "title": "" }, { "docid": "f48ee93659a25bee9a49e8be6c789987", "text": "what design is from a theoretical point of view, which is a role of the descriptive model. However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated.
Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable. We need a general model of design Articles", "title": "" }, { "docid": "009f83c48787d956b8ee79c1d077d825", "text": "Learning salient representations of multiview data is an essential step in many applications such as image classification, retrieval, and annotation. Standard predictive methods, such as support vector machines, often directly use all the features available without taking into consideration the presence of distinct views and the resultant view dependencies, coherence, and complementarity that offer key insights to the semantics of the data, and are therefore offering weak performance and are incapable of supporting view-level analysis. This paper presents a statistical method to learn a predictive subspace representation underlying multiple views, leveraging both multiview dependencies and availability of supervising side-information. Our approach is based on a multiview latent subspace Markov network (MN) which fulfills a weak conditional independence assumption that multiview observations and response variables are conditionally independent given a set of latent variables. To learn the latent subspace MN, we develop a large-margin approach which jointly maximizes data likelihood and minimizes a prediction loss on training data. Learning and inference are efficiently done with a contrastive divergence method. Finally, we extensively evaluate the large-margin latent MN on real image and hotel review datasets for classification, regression, image annotation, and retrieval. Our results demonstrate that the large-margin approach can achieve significant improvements in terms of prediction performance and discovering predictive latent subspace representations.", "title": "" }, { "docid": "3b4f63c2852f06f461da34cab7227a82", "text": "Mobile communication is an essential part of our daily lives. Therefore, it needs to be secure and reliable. In this paper, we study the security of feature phones, the most common type of mobile phone in the world. We built a framework to analyze the security of SMS clients of feature phones. The framework is based on a small GSM base station, which is readily available on the market. Through our analysis we discovered vulnerabilities in the feature phone platforms of all major manufacturers. Using these vulnerabilities we designed attacks against end-users as well as mobile operators. The threat is serious since the attacks can be used to prohibit communication on a large scale and can be carried out from anywhere in the world. Through further analysis we determined that such attacks are amplified by certain configurations of the mobile network. 
We conclude our research by providing a set of countermeasures.", "title": "" }, { "docid": "96a10ef46ebc1b1a4075d874bdfabe50", "text": "Bump mapping produces realistic shading by perturbing normal vectors to a surface, but does not show the shadows that the bumps cast on nearby parts of the same surface. In this paper, these shadows are found from precomputed tables of horizon angles, listing, for each position entry, the elevation of the horizon in a sampled collection of directions. These tables are made for bumps on a standard flat surface, and then a transformation is developed so that the same tables can be used for an arbitrary curved parametrized surface patch. This necessitates a new method for scaling the bump size to the patch size. Incremental calculations can be used in a scan line algorithm for polygonal surface approximations. The errors in the bump shadows are discussed, as well as their anti-aliasing. (An earlier version of this article appeared as Max [10].)", "title": "" }, { "docid": "a34e182fd182f493d2823dd42a7e5001", "text": "Various research communities have independently arrived at stream processing as a programming model for efficient and parallel computing. These communities include digital signal processing, databases, operating systems, and complex event processing. Since each community faces applications with challenging performance requirements, each of them has developed some of the same optimizations, but often with conflicting terminology and unstated assumptions. This article presents a survey of optimizations for stream processing. It is aimed both at users who need to understand and guide the system’s optimizer and at implementers who need to make engineering tradeoffs. To consolidate terminology, this article is organized as a catalog, in a style similar to catalogs of design patterns or refactorings. To make assumptions explicit and help understand tradeoffs, each optimization is presented with its safety constraints (when does it preserve correctness?) and a profitability experiment (when does it improve performance?). We hope that this survey will help future streaming system builders to stand on the shoulders of giants from not just their own community.", "title": "" }, { "docid": "10cc52c08da8118a220e436bc37e8beb", "text": "The most common approach in text mining classification tasks is to rely on features like words, part-of-speech tags, stems, or some other high-level linguistic features. Unlike the common approach, we present a method that uses only character p-grams (also known as n-grams) as features for the Arabic Dialect Identification (ADI) Closed Shared Task of the DSL 2016 Challenge. The proposed approach combines several string kernels using multiple kernel learning. In the learning stage, we try both Kernel Discriminant Analysis (KDA) and Kernel Ridge Regression (KRR), and we choose KDA as it gives better results in a 10-fold cross-validation carried out on the training set. Our approach is shallow and simple, but the empirical results obtained in the ADI Shared Task prove that it achieves very good results. Indeed, we ranked on the second place with an accuracy of 50.91% and a weighted F1 score of 51.31%. We also present improved results in this paper, which we obtained after the competition ended. Simply by adding more regularization into our model to make it more suitable for test data that comes from a different distribution than training data, we obtain an accuracy of 51.82% and a weighted F1 score of 52.18%. 
Furthermore, the proposed approach has an important advantage in that it is language independent and linguistic theory neutral, as it does not require any NLP tools.", "title": "" }, { "docid": "0e644fc1c567356a2e099221a774232c", "text": "We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.", "title": "" }, { "docid": "98a820c806b392e18b38d075b91a4fe9", "text": "This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.", "title": "" }, { "docid": "53598a996f31476b32871cf99f6b84f0", "text": "The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track included three tasks involving: (1A) identifying relationships between citing documents and the referred document, (1B) classifying the discourse facets, and (2) generating the abstractive summary. The dataset comprised 30 annotated sets of citing and reference papers from the open access research papers in the CL domain. This overview paper describes the participation and the official results of the second CL-SciSumm Shared Task, organized as a part of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2016), held in New Jersey, USA in June, 2016.
The annotated dataset used for this shared task and the scripts used for evaluation can be accessed and used by the community at: https://github.com/WING-NUS/scisumm-corpus.", "title": "" }, { "docid": "bbd5a204986f546b00dbcba8fbca75be", "text": "We present a novel keyword spotting (KWS) system that uses contextual automatic speech recognition (ASR). For voice-activated devices, it is common that a KWS system is run on the device in order to quickly detect a trigger phrase (e.g. “Ok Google”). After the trigger phrase is detected, the audio corresponding to the voice command that follows is streamed to the server. The audio is transcribed by the server-side ASR system and semantically processed to generate a response which is sent back to the device. Due to limited resources on the device, the device KWS system might introduce false accepts (FA) and false rejects (FR) that can cause an unsatisfactory user experience. We describe a system that uses server-side contextual ASR and trigger phrase non-terminals to improve overall KWS accuracy. We show that this approach can significantly reduce the FA rate (by 89%) while minimally increasing the FR rate (by 0.2%). Furthermore, we show that this system significantly improves the ASR quality, reducing Word Error Rate (WER) (by 10% to 50% relative), and allows the user to speak seamlessly, without pausing between the trigger phrase and the voice command.", "title": "" } ]
scidocsrr
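One of the negative passages in the row above describes a normalized convolutional layer for sparse depth maps, in which the data and a per-pixel certainty mask are filtered separately and the two results are divided. A minimal NumPy/SciPy sketch of that core operation (using an invented 4x4 depth map and a simple box filter, not the learned CNN layer from the paper) might look like this.

```python
import numpy as np
from scipy.signal import convolve2d

def normalized_convolution(depth, certainty, kernel, eps=1e-8):
    """Filter sparse data: convolve data*certainty and certainty separately,
    then divide, so unobserved pixels (certainty 0) do not pull the result
    toward zero."""
    num = convolve2d(depth * certainty, kernel, mode="same")
    den = convolve2d(certainty, kernel, mode="same")
    return num / (den + eps)

# Invented toy example: a 4x4 depth map with only two observed pixels.
depth = np.zeros((4, 4))
certainty = np.zeros((4, 4))
depth[0, 0], certainty[0, 0] = 2.0, 1.0
depth[2, 3], certainty[2, 3] = 5.0, 1.0

box = np.ones((3, 3)) / 9.0  # simple box filter as the smoothing kernel
print(normalized_convolution(depth, certainty, box))
```

Dividing by the filtered certainty is what keeps the many unobserved pixels from biasing the interpolated depth toward zero, which is the motivation the passage gives for separating the signal part from the certainty part.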
bed4993fb65660e961c0fd748b8d32e0
AC Versus DC Distribution Systems – Did We Get it Right?
[ { "docid": "819f6b62eb3f8f9d60437af28c657935", "text": "The global electrical energy consumption is rising and there is a steady increase of the demand on the power capacity, efficient production, distribution and utilization of energy. The traditional power systems are changing globally, a large number of dispersed generation (DG) units, including both renewable and nonrenewable energy sources such as wind turbines, photovoltaic (PV) generators, fuel cells, small hydro, wave generators, and gas/steam powered combined heat and power stations, are being integrated into power systems at the distribution level. Power electronics, the technology of efficiently processing electric power, play an essential part in the integration of the dispersed generation units for good efficiency and high performance of the power systems. This paper reviews the applications of power electronics in the integration of DG units, in particular, wind power, fuel cells and PV generators.", "title": "" }, { "docid": "56b58efbeab10fa95e0f16ad5924b9e5", "text": "This paper investigates (i) preplanned switching events and (ii) fault events that lead to islanding of a distribution subsystem and formation of a micro-grid. The micro-grid includes two distributed generation (DG) units. One unit is a conventional rotating synchronous machine and the other is interfaced through a power electronic converter. The interface converter of the latter unit is equipped with independent real and reactive power control to minimize islanding transients and maintain both angle stability and voltage quality within the micro-grid. The studies are performed based on a digital computer simulation approach using the PSCAD/EMTDC software package. The studies show that an appropriate control strategy for the power electronically interfaced DG unit can ensure stability of the micro-grid and maintain voltage quality at designated buses, even during islanding transients. This paper concludes that presence of an electronically-interfaced DG unit makes the concept of micro-grid a technically viable option for further investigations.", "title": "" } ]
[ { "docid": "a14ac26274448e0a7ecafdecae4830f9", "text": "Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for computational systems and autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data, thus without accounting for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. Although significant advances have been made in domain-specific learning with neural networks, extensive research efforts are required for the development of robust lifelong learning on autonomous agents and robots. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.", "title": "" }, { "docid": "b9ef5cf1b54af92a76c8d9d9825c2cd8", "text": "A private BitTorrent site (also known as a \"Bit Torrent darknet\") is a collection of torrents that can only be accessed by members of the darknet community. The private BitTorrent sites also have incentive policies which encourage users to continue to seed files after completing downloading. Although there are at least 800 independent BitTorrent darknets in the Internet, they have received little attention in the research community to date. We examine BitTorrent darknets from macroscopic, medium-scopic and microscopic perspectives. For the macroscopic analysis, we consider 800+ private sites to obtain a broad picture of the darknet landscape, and obtain a rough estimate of the total number of files, accounts, and simultaneous peers within the entire darknet landscape. Although the size of each private site is relatively small, we find the aggregate size of the darknet landscape to be surprisingly large. For the medium-scopic analysis, we investigate content overlap between four private sites and the public BitTorrent ecosystem. For the microscopic analysis, we explore in-depth one private site and examine its user behavior. We observe that the seed-to-leecher ratios and upload-to-download ratios are much higher than in the public ecosystem. 
The macroscopic, medium-scopic and microscopic analyses when combined provide a vivid picture of the darknet landscape, and provide insight into how the darknet landscape differs from the public BitTorrent ecosystem.", "title": "" }, { "docid": "6567ac7db83688e1bf290c7491a16bc7", "text": "In this paper we present our participation to SemEval-2018 Task 8 subtasks 1 & 2 respectively. We developed Convolution Neural Network system for malware sentence classification (subtask 1) and Conditional Random Fields system for malware token label prediction (subtask 2). We experimented with couple of word embedding strategies, feature sets and achieved competitive performance across the two subtasks. Code is made available at https://bitbucket.org/ vishnumani2009/securenlp", "title": "" }, { "docid": "11b2da0b86180878e8d5031a9069adae", "text": "PURPOSE\nThis article describes a cancer-related advocacy skill set that can be acquired through a learning process.\n\n\nOVERVIEW\nCancer survivorship is a process rather than a stage or time point, and it involves a continuum of events from diagnosis onward. There exists little consensus about what underlying processes explain different levels of long term functioning, but skills necessary for positive adaptation to cancer have been identified from both the professional literature and from the rich experiences of cancer survivors.\n\n\nCLINICAL IMPLICATIONS\nHealthcare practitioners need to be more creative and assertive in fostering consumer empowerment and should incorporate advocacy training into care plans. Strategies that emphasize personal competency and increase self-advocacy capabilities enable patients to make the best possible decisions for themselves regarding their cancer care. In addition, oncology practitioners must become informed advocacy partners with their patients in the public debate about healthcare and cancer care delivery.", "title": "" }, { "docid": "0573cb8c7eb10c5acfe59fc2d0de08e9", "text": "Players in the online ad ecosystem are struggling to acquire the user data required for precise targeting. Audience look-alike modeling has the potential to alleviate this issue, but models’ performance strongly depends on quantity and quality of available data. In order to maximize the predictive performance of our look-alike modeling algorithms, we propose two novel hybrid filtering techniques that utilize the recent neural probabilistic language model algorithm doc2vec. We apply these methods to data from a large mobile ad exchange and additional app metadata acquired from the Apple App store and Google Play store. First, we model mobile app users through their app usage histories and app descriptions (user2vec). Second, we introduce context awareness to that model by incorporating additional user and app-related metadata in model training (context2vec). Our findings are threefold: (1) the quality of recommendations provided by user2vec is notably higher than current state-of-the-art techniques. (2) User representations generated through hybrid filtering using doc2vec prove to be highly valuable features in supervised machine learning models for look-alike modeling. This represents the first application of hybrid filtering user models using neural probabilistic language models, specifically doc2vec, in look-alike modeling. 
(3) Incorporating context metadata in the doc2vec model training process to introduce context awareness has positive effects on performance and is superior to directly including the data as features in the downstream supervised models.", "title": "" }, { "docid": "c2195ae053d1bbf712c96a442a911e31", "text": "This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaption methods which rely on a global domain shift for all classes between the source and target domains, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption that the data samples from the same class should lay on an intrinsic low-dimensional subspace, even if they come from different domains, the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the joint subspaces of the source and target domains. Specifically, given labeled samples in the source domain, we construct a subspace for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and are highly likely to belong to the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across the source and target domains, and within the anchor subspaces, respectively. We further combine the anchor subspaces to the corresponding source subspaces to construct the joint subspaces. Subsequently, one-versus-rest support vector machine classifiers are trained using the data samples belonging to the same joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: 1) object recognition dataset for computer vision tasks and 2) sentiment classification dataset for natural language processing tasks. Comparison results demonstrate that the proposed method outperforms the comparison methods on both datasets.", "title": "" }, { "docid": "70781625aa7e95af8fc9e092f0b2c469", "text": "Software Defined Networking (SDN) provides opportunities for network verification and debugging by offering centralized visibility of the data plane. This has enabled both offline and online data-plane verification. However, little work has gone into the verification of time-varying properties (e.g., dynamic access control), where verification conditions change dynamically in response to application logic, network events, and external stimulus (e.g., operator requests).\n This paper introduces an assertion language to support verifying and debugging SDN applications with dynamically changing verification conditions. The language allows programmers to annotate controller applications with C-style assertions about the data plane. Assertions consist of regular expressions on paths to describe path properties for classes of packets, and universal and existential quantifiers that range over programmer-defined sets of hosts, switches, or other network entities. As controller programs dynamically add and remove elements from these sets, they generate new verification conditions that the existing data plane must satisfy. This work proposes an incremental data structure together with an underlying verification engine, to avoid naively re-verifying the entire data plane as these verification conditions change. 
To validate our ideas, we have implemented a debugging library on top of a modified version of VeriFlow, which is easily integrated into existing controller systems with minimal changes. Using this library, we have verified correctness properties for applications on several controller platforms.", "title": "" }, { "docid": "547ce0778d8d51d96a610fb72b6bb4e9", "text": "Applications in cyber-physical systems are increasingly coupled with online instruments to perform long-running, continuous data processing. Such “always on” dataflow applications are dynamic, where they need to change the applications logic and performance at runtime, in response to external operational needs. F`oε is a continuous dataflow framework that is designed to be adaptive for dynamic applications on Cloud infrastructure. It offers advanced dataflow patterns like BSP and MapReduce for flexible and holistic composition of streams and files, and supports dynamic recomposition at runtime with minimal impact on the execution. Adaptive resource allocation strategies allow our framework to effectively use elastic Cloud resources to meet varying data rates. We illustrate the design patterns of F`oε by running an integration pipeline and a tweet clustering application from the Smart Power Grids domain on a private Eucalyptus Cloud. The responsiveness of our resource adaptation is validated through simulations for periodic, bursty and random workloads.", "title": "" }, { "docid": "254c3fd35436b95a2ec56693042fc1da", "text": "Car detection and identification is an important task in the area of traffic control and management. Typically, to tackle this task, large datasets and domain-specific features are used to best fit the data. In our project, we implement, train, and test several state-of-the-art classifiers trained on domain-general datasets for the task of identifying the make and models of cars from various angles and different settings, with the added constraint of limited data and time. We experiment with different levels of transfer learning for fitting these models over to our domain. We report and compare these results to that of baseline models, and discuss the advantages of this approach.", "title": "" }, { "docid": "e8f424ee75011e7cf9c2c3cbf5ea5037", "text": "BACKGROUND\nEmotional distress is an increasing public health problem and Hatha yoga has been claimed to induce stress reduction and empowerment in practicing subjects. We aimed to evaluate potential effects of Iyengar Hatha yoga on perceived stress and associated psychological outcomes in mentally distressed women.\n\n\nMATERIAL/METHODS\nA controlled prospective non-randomized study was conducted in 24 self-referred female subjects (mean age 37.9+/-7.3 years) who perceived themselves as emotionally distressed. Subjects were offered participation in one of two subsequential 3-months yoga programs. Group 1 (n=16) participated in the first class, group 2 (n=8) served as a waiting list control. During the yoga course, subjects attended two-weekly 90-min Iyengar yoga classes. Outcome was assessed on entry and after 3 months by Cohen Perceived Stress Scale, State-Trait Anxiety Inventory, Profile of Mood States, CESD-Depression Scale, Bf-S/Bf-S' Well-Being Scales, Freiburg Complaint List and ratings of physical well-being. 
Salivary cortisol levels were measured before and after an evening yoga class in a second sample.\n\n\nRESULTS\nCompared to waiting-list, women who participated in the yoga-training demonstrated pronounced and significant improvements in perceived stress (P<0.02), State and Trait Anxiety (P<0.02 and P<0.01, respectively), well-being (P<0.01), vigor (P<0.02), fatigue (P<0.02) and depression (P<0.05). Physical well-being also increased (P<0.01), and those subjects suffering from headache or back pain reported marked pain relief. Salivary cortisol decreased significantly after participation in a yoga class (P<0.05).\n\n\nCONCLUSIONS\nWomen suffering from mental distress participating in a 3-month Iyengar yoga class show significant improvements on measures of stress and psychological outcomes. Further investigation of yoga with respect to prevention and treatment of stress-related disease and of underlying mechanism is warranted.", "title": "" }, { "docid": "4a043a02f3fad07797245b0a2c4ea4c5", "text": "The worldwide population of people over the age of 65 has been predicted to more than double from 1990 to 2025. Therefore, ubiquitous health-care systems have become an important topic of research in recent years. In this paper, an integrated system for portable electrocardiography (ECG) monitoring, with an on-board processor for time–frequency analysis of heart rate variability (HRV), is presented. The main function of proposed system comprises three parts, namely, an analog-to-digital converter (ADC) controller, an HRV processor, and a lossless compression engine. At the beginning, ECG data acquired from front-end circuits through the ADC controller is passed through the HRV processor for analysis. Next, the HRV processor performs real-time analysis of time–frequency HRV using the Lomb periodogram and a sliding window configuration. The Lomb periodogram is suited for spectral analysis of unevenly sampled data and has been applied to time–frequency analysis of HRV in the proposed system. Finally, the ECG data are compressed by 2.5 times using the lossless compression engine before output using universal asynchronous receiver/transmitter (UART). Bluetooth is employed to transmit analyzed HRV data and raw ECG data to a remote station for display or further analysis. The integrated ECG health-care system design proposed has been implemented using UMC 90 nm CMOS technology. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a11ed66e5368060be9585022db65c2ad", "text": "This article provides a historical context of evolutionary psychology and feminism, and evaluates the contributions to this special issue of Sex Roles within that context. We briefly outline the basic tenets of evolutionary psychology and articulate its meta-theory of the origins of gender similarities and differences. The article then evaluates the specific contributions: Sexual Strategies Theory and the desire for sexual variety; evolved standards of beauty; hypothesized adaptations to ovulation; the appeal of risk taking in human mating; understanding the causes of sexual victimization; and the role of studies of lesbian mate preferences in evaluating the framework of evolutionary psychology. Discussion focuses on the importance of social and cultural context, human behavioral flexibility, and the evidentiary status of specific evolutionary psychological hypotheses. 
We conclude by examining the potential role of evolutionary psychology in addressing social problems identified by feminist agendas.", "title": "" }, { "docid": "6649b5482a9a5413059ff4f9446223c6", "text": "The emergence of drug resistance to traditional chemotherapy and newer targeted therapies in cancer patients is a major clinical challenge. Reactivation of the same or compensatory signaling pathways is a common class of drug resistance mechanisms. Employing drug combinations that inhibit multiple modules of reactivated signaling pathways is a promising strategy to overcome and prevent the onset of drug resistance. However, with thousands of available FDA-approved and investigational compounds, it is infeasible to experimentally screen millions of possible drug combinations with limited resources. Therefore, computational approaches are needed to constrain the search space and prioritize synergistic drug combinations for preclinical studies. In this study, we propose a novel approach for predicting drug combinations through investigating potential effects of drug targets on disease signaling network. We first construct a disease signaling network by integrating gene expression data with disease-associated driver genes. Individual drugs that can partially perturb the disease signaling network are then selected based on a drug-disease network \"impact matrix\", which is calculated using network diffusion distance from drug targets to signaling network elements. The selected drugs are subsequently clustered into communities (subgroups), which are proposed to share similar mechanisms of action. Finally, drug combinations are ranked according to maximal impact on signaling sub-networks from distinct mechanism-based communities. Our method is advantageous compared to other approaches in that it does not require large amounts drug dose response data, drug-induced \"omics\" profiles or clinical efficacy data, which are not often readily available. We validate our approach using a BRAF-mutant melanoma signaling network and combinatorial in vitro drug screening data, and report drug combinations with diverse mechanisms of action and opportunities for drug repositioning.", "title": "" }, { "docid": "2cd905573be23462b5768e2dcdf8847b", "text": "Identity verification is an increasingly important process in our daily lives. Whether we need to use our own equipment or to prove our identity to third parties in order to use services or gain access to physical places, we are constantly required to declare our identity and prove our claim. Traditional authentication methods fall into two categories: proving that you know something (i.e., password-based authentication) and proving that you own something (i.e., token-based authentication). These methods connect the identity with an alternate and less rich representation, for instance a password, that can be lost, stolen, or shared. A solution to these problems comes from biometric recognition systems. Biometrics offers a natural solution to the authentication problem, as it contributes to the construction of systems that can recognize people by the analysis of their anatomical and/or behavioral characteristics. With biometric systems, the representation of the identity is something that is directly derived from the subject, therefore it has properties that a surrogate representation, like a password or a token, simply cannot have (Jain et al. (2006; 2004); Prabhakar et al. (2003)). 
The strength of a biometric system is determined mainly by the trait that is used to verify the identity. Plenty of biometric traits have been studied and some of them, like fingerprint, iris and face, are nowadays used in widely deployed systems. Today, one of the most important research directions in the field of biometrics is the characterization of novel biometric traits that can be used in conjunction with other traits, to limit their shortcomings or to enhance their performance. The aim of this chapter is to introduce the reader to the usage of heart sounds for biometric recognition, describing the strengths and the weaknesses of this novel trait and analyzing in detail the methods developed so far and their performance. The usage of heart sounds as physiological biometric traits was first introduced in Beritelli & Serrano (2007), in which the authors proposed and started exploring this idea. Their system is based on the frequency analysis, by means of the Chirp z-Transform (CZT), of the sounds produced by the heart during the closure of the mitral tricuspid valve and during the closure of the aortic pulmonary valve. These sounds, called S1 and S2, are extracted from the input 11", "title": "" }, { "docid": "f0ca75d480ca80ab9c3f8ea35819d064", "text": "Purpose – The purpose of this paper is to evaluate the influence of psychological hardiness, social judgment, and “Big Five” personality dimensions on leader performance in U.S. military academy cadets at West Point. Design/methodology/approach – Army Cadets were studied in two different organizational contexts: (a)summer field training, and (b)during academic semesters. Leader performance was measured with leadership grades (supervisor ratings) aggregated over four years at West Point. Findings After controlling for general intellectual abilities, hierarchical regression results showed leader performance in the summer field training environment is predicted by Big Five Extraversion, and Hardiness, and a trend for Social Judgment. During the academic period context, leader performance is predicted by mental abilities, Big Five Conscientiousness, and Hardiness, with a trend for Social Judgment. Research limitations/implications Results confirm the importance of psychological hardiness, extraversion, and conscientiousness as factors influencing leader effectiveness, and suggest that social judgment aspects of emotional intelligence can also be important. These results also show that different Big Five personality factors may influence leadership in different organizational", "title": "" }, { "docid": "6753c81b82b505ee8707e0e8f988d71d", "text": "Music and its attributes have been used in cryptography from early days. Today music is vastly used in information hiding with the use of Steganography techniques. This paper proposes an alternative to steganography by designing an algorithm for the encryption of text message into music and its attributes. The proposed algorithm converts the plain text message into a musical piece by replacing the text characters of the message by mathematically generated musical notes. The sequence of musical notes generated for the particular character sequence of plain text message mimic a musical pattern. This musical pattern is sent to the receiver as a music file. The seed value for encryption/decryption key is sent using the asymmetric algorithm RSA, where the key maps the letters corresponding to a musical note. 
The encryption key used is an n x n matrix and it will be generated using the seed value for the key on both sender and receiver ends.", "title": "" }, { "docid": "9a4bd291522b19ab4a6848b365e7f546", "text": "This paper reports on modern approaches in Information Extraction (IE) and its two main sub-tasks of Named Entity Recognition (NER) and Relation Extraction (RE). Basic concepts and the most recent approaches in this area are reviewed, which mainly include Machine Learning (ML) based approaches and the more recent trend to Deep Learning (DL)", "title": "" }, { "docid": "91617f4ed1fbd5d37368caa326a91154", "text": "Different evaluation measures assess different characteristics of machine learning algorithms. The empirical evaluation of algorithms and classifiers is a matter of on-going debate among researchers. Most measures in use today focus on a classifier's ability to identify classes correctly. We note other useful properties, such as failure avoidance or class discrimination, and we suggest measures to evaluate such properties. These measures – Youden's index, likelihood, Discriminant power – are used in medical diagnosis. We show that they are interrelated, and we apply them to a case study from the field of electronic negotiations. We also list other learning problems which may benefit from the application of these measures.", "title": "" }, { "docid": "7325562e1ff336751aac739e9735ea2c", "text": "Vol. XLV (December 2008), 741–756 741 © 2008, American Marketing Association ISSN: 0022-2437 (print), 1547-7193 (electronic) *Chi Kin (Bennett) Yim is Professor of Marketing (e-mail: yim@business.hku.hk), and David K. Tse is Chair Professor of International Marketing (e-mail: davidtse@business.hku.hk), School of Business, University of Hong Kong. Kimmy Wa Chan is Assistant Professor of Marketing, Department of Management and Marketing, Hong Kong Polytechnic University (e-mail: mskimmy@polyu.edu.hk). This research was funded by a Hong Kong SAR RGC research grant awarded to the first two authors. Ruth Bolton and Jeffrey Inman served as associate editors for this article. CHI KIN (BENNETT) YIM, DAVID K. TSE, and KIMMY WA CHAN*", "title": "" }, { "docid": "956660129d1710cf1fa28b8c5f5086b1", "text": "Using magnetic field data as fingerprints for localization in indoor environment has become popular in recent years. Particle filter is often used to improve accuracy. However, most of existing particle filter based approaches either are heavily affected by motion estimation errors, which makes the system unreliable, or impose strong restrictions on smartphone such as fixed phone orientation, which is not practical for real-life use. In this paper, we present an indoor localization system named MaLoc, built on our proposed augmented particle filter. We create several innovations on the motion model, the measurement model and the resampling model to enhance the traditional particle filter. To minimize errors in motion estimation and improve the robustness of particle filter, we augment the particle filter with a dynamic step length estimation algorithm and a heuristic particle resampling algorithm. We use a hybrid measurement model which combines a new magnetic fingerprinting model and the existing magnitude fingerprinting model to improve the system performance and avoid calibrating different smartphone magnetometers. 
In addition, we present a novel localization quality estimation method and a localization failure detection method to address the \"Kidnapped Robot Problem\" and improve the overall usability. Our experimental studies show that MaLoc achieves a localization accuracy of 1~2.8m on average in a large building.", "title": "" } ]
scidocsrr
e0f002f256a89bb86e6891743aa4aa4c
A PID BASED ANFIS & FUZZY CONTROL OF INVERTED PENDULUM ON INCLINED PLANE ( IPIP )
[ { "docid": "cc9ee1b5111974da999d8c52ba393856", "text": "The back propagation (BP) neural network algorithm is a multi-layer feedforward network trained according to error back propagation algorithm and is one of the most widely applied neural network models. BP network can be used to learn and store a great deal of mapping relations of input-output model, and no need to disclose in advance the mathematical equation that describes these mapping relations. Its learning rule is to adopt the steepest descent method in which the back propagation is used to regulate the weight value and threshold value of the network to achieve the minimum error sum of square. This paper focuses on the analysis of the characteristics and mathematical theory of BP neural network and also points out the shortcomings of BP algorithm as well as several methods for improvement.", "title": "" } ]
[ { "docid": "750846bc27dc013bd0d392959caf3ecc", "text": "Analysis of the WinZip en ryption method Tadayoshi Kohno May 8, 2004 Abstra t WinZip is a popular ompression utility for Mi rosoft Windows omputers, the latest version of whi h is advertised as having \\easy-to-use AES en ryption to prote t your sensitive data.\" We exhibit several atta ks against WinZip's new en ryption method, dubbed \\AE-2\" or \\Advan ed En ryption, version two.\" We then dis uss se ure alternatives. Sin e at a high level the underlying WinZip en ryption method appears se ure (the ore is exa tly En ryptthen-Authenti ate using AES-CTR and HMAC-SHA1), and sin e one of our atta ks was made possible be ause of the way that WinZip Computing, In . de ided to x a di erent se urity problem with its previous en ryption method AE-1, our atta ks further unders ore the subtlety of designing ryptographi ally se ure software.", "title": "" }, { "docid": "704598402da135b6b7e3251de4c6edf8", "text": "Almost every complex software system today is configurable. While configurability has many benefits, it challenges performance prediction, optimization, and debugging. Often, the influences of individual configuration options on performance are unknown. Worse, configuration options may interact, giving rise to a configuration space of possibly exponential size. Addressing this challenge, we propose an approach that derives a performance-influence model for a given configurable system, describing all relevant influences of configuration options and their interactions. Our approach combines machine-learning and sampling heuristics in a novel way. It improves over standard techniques in that it (1) represents influences of options and their interactions explicitly (which eases debugging), (2) smoothly integrates binary and numeric configuration options for the first time, (3) incorporates domain knowledge, if available (which eases learning and increases accuracy), (4) considers complex constraints among options, and (5) systematically reduces the solution space to a tractable size. A series of experiments demonstrates the feasibility of our approach in terms of the accuracy of the models learned as well as the accuracy of the performance predictions one can make with them.", "title": "" }, { "docid": "bbfcce9ec7294cb542195cca1dfbcc6c", "text": "We propose a new algorithm, DASSO, for fitting the entire coef fici nt path of the Dantzig selector with a similar computational cost to the LA RS algorithm that is used to compute the Lasso. DASSO efficiently constructs a piecewi s linear path through a sequential simplex-like algorithm, which is remarkably si milar to LARS. Comparison of the two algorithms sheds new light on the question of how th e Lasso and Dantzig selector are related. In addition, we provide theoretical c onditions on the design matrix, X, under which the Lasso and Dantzig selector coefficient esti mates will be identical for certain tuning parameters. As a consequence, in many instances, we are able to extend the powerful non-asymptotic bounds that have been de veloped for the Dantzig selector to the Lasso. Finally, through empirical studies o f imulated and real world data sets we show that in practice, when the bounds hold for th e Dantzig selector, they almost always also hold for the Lasso. 
Some key words : Dantzig selector; LARS; Lasso; DASSO", "title": "" }, { "docid": "cb815a01960490760e2ac581e26f4486", "text": "To solve the weakly-singular Volterra integro-differential equations, the combined method of the Laplace Transform Method and the Adomian Decomposition Method is used. As a result, series solutions of the equations are constructed. In order to explore the rapid decay of the equations, the pade approximation is used. The results present validity and great potential of the method as a powerful algorithm in order to present series solutions for singular kind of differential equations.", "title": "" }, { "docid": "77e2aac8b42b0b9263278280d867cb40", "text": "This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first “patch-wise” network acts as an auto-encoder that extracts the most salient features of image patches while the second “image-wise” network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95% accuracy on the validation set compared to previously reported 77% accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018.", "title": "" }, { "docid": "40ba65504518383b4ca2a6fabff261fe", "text": "Fig. 1. Noirot and Quennedey's original classification of insect exocrine glands, based on a rhinotermitid sternal gland. The asterisk indicates a subcuticular space. Abbreviations: C, cuticle; D, duct cells; G1, secretory cells class 1; G2, secretory cells class 2; G3, secretory cells class 3; S, campaniform sensilla (modified after Noirot and Quennedey, 1974). ‘Describe the differences between endocrine and exocrine glands’, it sounds a typical exam question from a general biology course during our time at high school. Because of their secretory products being released to the outside world, exocrine glands definitely add flavour to our lives. Everybody is familiar with their secretions, from the salty and perhaps unpleasantly smelling secretions from mammalian sweat glands to the sweet exudates of the honey glands used by some caterpillars to attract ants, from the most painful venoms of bullet ants and scorpions to the precious wax that honeybees use to make their nest combs. Besides these functions, exocrine glands are especially known for the elaboration of a broad spectrum of pheromonal substances, and can also be involved in the production of antibiotics, lubricants, and digestive enzymes. Modern research in insect exocrinology started with the classical works of Charles Janet, who introduced a histological approach to the insect world (Billen and Wilson, 2007). The French school of insect anatomy remained strong since then, and the commonly used classification of insect exocrine glands generally follows the pioneer paper of Charles Noirot and Andr e Quennedey (1974). 
These authors were leading termite researchers using their extraordinary knowledge on termite glands to understand related phenomena, such as foraging and reproductive behaviour. They distinguish between class 1 with secretory cells adjoining directly to the cuticle, and class 3 with bicellular units made up of a large secretory cell and its accompanying duct cell that carries the secretion to the exterior (Fig. 1). The original classification included also class 2 secretory cells, but these are very rare and are only found in sternal and tergal glands of a cockroach and many termites (and also in the novel nasus gland described in this issue!). This classification became universally used, with the rather strange consequence that the vast majority of insect glands is illogically made up of class 1 and class 3 cells. In a follow-up paper, the uncommon class 2 cells were re-considered as oenocyte homologues (Noirot and Quennedey, 1991). Irrespectively of these objections, their 1974 pioneer paper is a cornerstone of modern works dealing with insect exocrine glands, as is also obvious in the majority of the papers in this special issue. This paper already received 545 citations at Web of Science and 588 at Google Scholar (both on 24 Aug 2015), so one can easily say that all researchers working on insect glands consider this work truly fundamental. Exocrine glands are organs of cardinal importance in all insects. The more common ones include mandibular and labial", "title": "" }, { "docid": "ddfd02c12c42edb2607a6f193f4c242b", "text": "We design the first Leakage-Resilient Identity-Based Encryption (LR-IBE) systems from static assumptions in the standard model. We derive these schemes by applying a hash proof technique from Alwen et.al. (Eurocrypt '10) to variants of the existing IBE schemes of Boneh-Boyen, Waters, and Lewko-Waters. As a result, we achieve leakage-resilience under the respective static assumptions of the original systems in the standard model, while also preserving the efficiency of the original schemes. Moreover, our results extend to the Bounded Retrieval Model (BRM), yielding the first regular and identity-based BRM encryption schemes from static assumptions in the standard model.\n The first LR-IBE system, based on Boneh-Boyen IBE, is only selectively secure under the simple Decisional Bilinear Diffie-Hellman assumption (DBDH), and serves as a stepping stone to our second fully secure construction. This construction is based on Waters IBE, and also relies on the simple DBDH. Finally, the third system is based on Lewko-Waters IBE, and achieves full security with shorter public parameters, but is based on three static assumptions related to composite order bilinear groups.", "title": "" }, { "docid": "fc509e8f8c0076ad80df5ff6ee6b6f1e", "text": "The purposes of this study are to construct an instrument to evaluate service quality of mobile value-added services and have a further discussion of the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention. Structural equation modeling and multiple regression analysis were used to analyze the data collected from college and graduate students of fifteen major universities in Taiwan. 
The main findings are as follows: (1) service quality positively influences both perceived value and customer satisfaction; (2) perceived value positively influences on both customer satisfaction and post-purchase intention; (3) customer satisfaction positively influences post-purchase intention; (4) service quality has an indirect positive influence on post-purchase intention through customer satisfaction or perceived value; (5) among the dimensions of service quality, “customer service and system reliability” is most influential on perceived value and customer satisfaction, and the influence of “content quality” ranks second; (6) the proposed model is proven with the effectiveness in explaining the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention in mobile added-value services.", "title": "" }, { "docid": "3da8cb73f3770a803ca43b8e2a694ccc", "text": "We present a novel framework for hallucinating faces of unconstrained poses and with very low resolution (face size as small as 5pxIOD). In contrast to existing studies that mostly ignore or assume pre-aligned face spatial configuration (e.g. facial landmarks localization or dense correspondence field), we alternatingly optimize two complementary tasks, namely face hallucination and dense correspondence field estimation, in a unified framework. In addition, we propose a new gated deep bi-network that contains two functionality-specialized branches to recover different levels of texture details. Extensive experiments demonstrate that such formulation allows exceptional hallucination quality on in-the-wild low-res faces with significant pose and illumination variations.", "title": "" }, { "docid": "5536e605e0b8a25ee0a5381025484f60", "text": "Relational Markov Random Fields are a general and flexible framework for reasoning about the joint distribution over attributes of a large number of interacting entities. The main computational difficulty in learning such models is inference. Even when dealing with complete data, where one can summarize a large domain by sufficient statistics, learning requires one to compute the expectation of the sufficient statistics given different parameter choices. The typical solution to this problem is to resort to approximate inference procedures, such as loopy belief propagation. Although these procedures are quite efficient, they still require computation that is on the order of the number of interactions (or features) in the model. When learning a large relational model over a complex domain, even such approximations require unrealistic running time. In this paper we show that for a particular class of relational MRFs, which have inherent symmetry, we can perform the inference needed for learning procedures using a template-level belief propagation. This procedure’s running time is proportional to the size of the relational model rather than the size of the domain. Moreover, we show that this computational procedure is equivalent to sychronous loopy belief propagation. This enables a dramatic speedup in inference and learning time. 
We use this procedure to learn relational MRFs for capturing the joint distribution of large protein-protein interaction networks.", "title": "" }, { "docid": "c168fdc6e1e19280aea2eb011ec7a3b1", "text": "OBJECTIVE\nThe study aimed to formulate an easy clinical approach that may be used by clinicians of all backgrounds to diagnose vulvar dermatological disorders.\n\n\nMATERIALS AND METHODS\nThe International Society for the Study of Vulvovaginal Disease appointed a committee with multinational members from the fields of dermatology, gynecology, and pathology and charged the committee to formulate a clinically based terminology and classification of vulvar dermatological disorders. The committee carried out its work by way of multiple rounds of e-mails extending over almost 2 year's time.\n\n\nRESULTS\nThe committee was able to formulate a consensus report containing terminology, classification, and a step-wise approach to clinical diagnosis of vulvar dermatological disorders. This report was presented and approved by the International Society for the Study of Vulvovaginal Disease at the XXI International Congress held in Paris, France, on September 3 to 8, 2011.\n\n\nCONCLUSIONS\nThe authors believe that the approach to terminology and classification as well as clinical diagnosis contained in this article allows clinicians to make highly accurate diagnoses of vulvar dermatological disorders within the clinical setting. This, in turn, will reduce the need for referrals and will improve the care for women with most vulvar disorders.", "title": "" }, { "docid": "2eac0a94204b24132e496639d759f545", "text": "Numerous algorithms have been proposed for transferring knowledge from a label-rich domain (source) to a label-scarce domain (target). Most of them are proposed for closed-set scenario, where the source and the target domain completely share the class of their samples. However, in practice, a target domain can contain samples of classes that are not shared by the source domain. We call such classes the “unknown class” and algorithms that work well in the open set situation are very practical. However, most existing distribution matching methods for domain adaptation do not work well in this setting because unknown target samples should not be aligned with the source. In this paper, we propose a method for an open set domain adaptation scenario, which utilizes adversarial training. This approach allows to extract features that separate unknown target from known target samples. During training, we assign two options to the feature generator: aligning target samples with source known ones or rejecting them as unknown target ones. Our method was extensively evaluated and outperformed other methods with a large margin in most settings.", "title": "" }, { "docid": "effd314d69f6775b80dbe5570e3f37d8", "text": "New paradigms in networking industry, such as Software Defined Networking (SDN) and Network Functions Virtualization (NFV), require the hypervisors to enable the execution of Virtual Network Functions in virtual machines (VMs). In this context, the virtual switch function is critical to achieve carrier grade performance, hardware independence, advanced features and programmability. 
SnabbSwitch is a virtual switch designed to run in user space with carrier grade performance targets, based on an efficient architecture which has driven the development of vhost-user (now also adopted by OVS-DPDK, the user space implementation of OVS based on Intel DPDK), easy to deploy and to program through its Lua scripting layer. This paper presents the SnabbSwitch virtual switch implementation along with its novelties (the vhost-user implementation and the usage of a trace compiler) and code optimizations, which have been merged in the mainline project repository. Extensive benchmarking activities, whose results are included in this paper, have been carried on to compare SnabbSwitch with other virtual switching solutions (i.e., OVS, OVS-DPDK, Linux Bridge, VFIO and SR-IOV). These results show that SnabbSwitch performs as well as hardware based solutions, such as SR-IOV and VFIO, while allowing for additional functional and flexible operation; they show also that SnabbSwitch is faster than the vhost-user based version (user space) of OVS-DPDK.", "title": "" }, { "docid": "bdd1c64962bfb921762259cca4a23aff", "text": "Ever since the emergence of social networking sites (SNSs), it has remained a question without a conclusive answer whether SNSs make people more or less lonely. To achieve a better understanding, researchers need to move beyond studying overall SNS usage. In addition, it is necessary to attend to personal attributes as potential moderators. Given that SNSs provide rich opportunities for social comparison, one highly relevant personality trait would be social comparison orientation (SCO), and yet this personal attribute has been understudied in social media research. Drawing on literature of psychosocial implications of social media use and SCO, this study explored associations between loneliness and various Instagram activities and the role of SCO in this context. A total of 208 undergraduate students attending a U.S. mid-southern university completed a self-report survey (Mage = 19.43, SD = 1.35; 78 percent female; 57 percent White). Findings showed that Instagram interaction and Instagram browsing were both related to lower loneliness, whereas Instagram broadcasting was associated with higher loneliness. SCO moderated the relationship between Instagram use and loneliness such that Instagram interaction was related to lower loneliness only for low SCO users. The results revealed implications for healthy SNS use and the importance of including personality traits and specific SNS use patterns to disentangle the role of SNS use in psychological well-being.", "title": "" }, { "docid": "62b64b2182bbcd92a6bd84aec8927166", "text": "Parasympathetic regulation of heart rate through the vagus nerve--often measured as resting respiratory sinus arrhythmia or cardiac vagal tone (CVT)--is a key biological correlate of psychological well-being. However, recent theorizing has suggested that many biological and psychological processes can become maladaptive when they reach extreme levels. This raises the possibility that CVT might not have an unmitigated positive relationship with well-being. In line with this reasoning, across 231 adult participants (Mage = 40.02 years; 52% female), we found that CVT was quadratically related to multiple measures of well-being, including life satisfaction and depressive symptoms. Individuals with moderate CVT had higher well-being than those with low or high CVT. 
These results provide the first direct evidence of a nonlinear relationship between CVT and well-being, adding to a growing body of research that has suggested some biological processes may cease being adaptive when they reach extreme levels.", "title": "" }, { "docid": "289694f2395a6a2afc7d86d475b9c02d", "text": "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a finegrained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.", "title": "" }, { "docid": "d00765c898151dd5977fab8e39c4d7e9", "text": "Knowledge graphs (KG) play a crucial role in many modern applications. However, constructing a KG from natural language text is challenging due to the complex structure of the text. Recently, many approaches have been proposed to transform natural language text to triples to obtain KGs. Such approaches have not yet provided efficient results for mapping extracted elements of triples, especially the predicate, to their equivalent elements in a KG. Predicate mapping is essential because it can reduce the heterogeneity of the data and increase the searchability over a KG. In this article, we propose T2KG, an automatic KG creation framework for natural language text, to more effectively map natural language text to predicates. In our framework, a hybrid combination of a rule-based approach and a similarity-based approach is presented for mapping a predicate to its corresponding predicate in a KG. Based on experimental results, the hybrid approach can identify more similar predicate pairs than a baseline method in the predicate mapping task. An experiment on KG creation is also conducted to investigate the performance of the T2KG. The experimental results show that the T2KG also outperforms the baseline in KG creation. Although KG creation is conducted in open domains, in which prior knowledge is not provided, the T2KG still achieves an F1 score of approximately 50% when generating triples in the KG creation task. In addition, an empirical study on knowledge population using various text sources is conducted, and the results indicate the T2KG could be used to obtain knowledge that is not currently available from DBpedia. 
key words: knowledge graph, knowledge discovery, knowledge extraction, linked data", "title": "" }, { "docid": "2b688f9ca05c2a79f896e3fee927cc0d", "text": "This paper presents a new synchronous-reference frame (SRF)-based control method to compensate power-quality (PQ) problems through a three-phase four-wire unified PQ conditioner (UPQC) under unbalanced and distorted load conditions. The proposed UPQC system can improve the power quality at the point of common coupling on power distribution systems under unbalanced and distorted load conditions. The simulation results based on Matlab/Simulink are discussed in detail to support the SRF-based control method presented in this paper. The proposed approach is also validated through experimental study with the UPQC hardware prototype.", "title": "" }, { "docid": "92d04ad5a9fa32c2ad91003213b1b86d", "text": "You're being asked to quantify usability improvements with statistics. But even with a background in statistics, you are hesitant to statistically analyze the data, as you may be unsure about which statistical tests to...", "title": "" }, { "docid": "1f5a2259bd57f35a604fb8d23538c741", "text": "Can peer-to-peer lending (P2P) crowdfunding disintermediate and mitigate information frictions in lending such that choices and outcomes for at least some borrowers and investors are improved? I offer a framing of issues and survey the nascent literature on P2P. On the investor side, P2P disintermediates an asset class of consumer loans, and investors seem to capture some rents associated with the removal of the cost of that financial intermediation. Risk and portfolio choice questions linger prior to any inference. On the borrower side, evidence suggests that proximate knowledge (direct or inferred) unearths soft information, and by implication, P2P should be able to offer pricing and/or access benefits to potential borrowers. However, social connections require costly certification (skin in the game) to inform credit risk. Early research suggests an ever-increasing scope for use of Big Data and incentivized re-intermediation of underwriting. I ask many more questions than current research can answer, hoping to motivate future research.", "title": "" } ]
scidocsrr
b458ce1c4b32894522418d88521b0413
Using Smartphones to Detect Car Accidents and Provide Situational Awareness to Emergency Responders
[ { "docid": "8718d91f37d12b8ff7658723a937ea84", "text": "We consider the problem of monitoring road and traffic conditions in a city. Prior work in this area has required the deployment of dedicated sensors on vehicles and/or on the roadside, or the tracking of mobile phones by service providers. Furthermore, prior work has largely focused on the developed world, with its relatively simple traffic flow patterns. In fact, traffic flow in cities of the developing regions, which comprise much of the world, tends to be much more complex owing to varied road conditions (e.g., potholed roads), chaotic traffic (e.g., a lot of braking and honking), and a heterogeneous mix of vehicles (2-wheelers, 3-wheelers, cars, buses, etc.).\n To monitor road and traffic conditions in such a setting, we present Nericell, a system that performs rich sensing by piggybacking on smartphones that users carry with them in normal course. In this paper, we focus specifically on the sensing component, which uses the accelerometer, microphone, GSM radio, and/or GPS sensors in these phones to detect potholes, bumps, braking, and honking. Nericell addresses several challenges including virtually reorienting the accelerometer on a phone that is at an arbitrary orientation, and performing honk detection and localization in an energy efficient manner. We also touch upon the idea of triggered sensing, where dissimilar sensors are used in tandem to conserve energy. We evaluate the effectiveness of the sensing functions in Nericell based on experiments conducted on the roads of Bangalore, with promising results.", "title": "" } ]
[ { "docid": "f850321173db137674eb74a0dd2afc30", "text": "The relational data model has been dominant and widely used since 1970. However, as the need to deal with big data grows, new data models, such as Hadoop and NoSQL, were developed to address the limitation of the traditional relational data model. As a result, determining which data model is suitable for applications has become a challenge. The purpose of this paper is to provide insight into choosing the suitable data model by conducting a benchmark using Yahoo! Cloud Serving Benchmark (YCSB) on three different database systems: (1) MySQL for relational data model, (2) MongoDB for NoSQL data model, and (3) HBase for Hadoop framework. The benchmark was conducted by running four different workloads. Each workload is executed using a different increasing operation and thread count, while observing how their change respectively affects throughput, latency, and runtime.", "title": "" }, { "docid": "497fdaf295df72238f9ec0cb879b6a48", "text": "A vehicle or fleet management system is implemented for tracking the movement of the vehicle at any time from any location. This proposed system helps in real time tracking of the vehicle using a smart phone application. This method is easy and efficient when compared to other implementations. In emerging technology of developing IOT (Internet of Things) the generic 8 bit/16 bit micro controllers are replaced by 32bit micro controllers in the embedded systems. This has many advantages like use of 32bit micro controller’s scalability, reusability and faster execution speed. Implementation of RTOS is very much necessary for having a real time system. RTOS features are application portability, reusability, more efficient use of system resources. The proposed system uses a 32bit ARM7 based microcontroller with an embedded Real Time Operating System (RTOS).The vehicle unit application is written on FreeRTOS. The peripheral drivers like UART, External interrupt are developed for RTOS aware environment. The vehicle unit consists of a GPS/GPRS module where the position of the vehicle is got from the Global Positioning System (GPS) and the General Packet Radio Service (GPRS) is used to update the timely information of the vehicle position. The vehicle unit updates the location to the Fleet management application on the web server. The vehicle management is a java based web application integrated with MySQL Database. The web application in the proposed system is based on OpenGTS open source vehicle tracking application. A GoTrack Android application is configured to work with web application. The smart phone application also provides a separate login for administrator to add, edit and remove the vehicles on the fleet management system. The users and administrators can continuously monitor the vehicle using a smart phone application.", "title": "" }, { "docid": "92684148cd7d2a6a21657918015343b0", "text": "Radiative wireless power transfer (WPT) is a promising technology to provide cost-effective and real-time power supplies to wireless devices. Although radiative WPT shares many similar characteristics with the extensively studied wireless information transfer or communication, they also differ significantly in terms of design objectives, transmitter/receiver architectures and hardware constraints, and so on. 
In this paper, we first give an overview on the various WPT technologies, the historical development of the radiative WPT technology and the main challenges in designing contemporary radiative WPT systems. Then, we focus on the state-of-the-art communication and signal processing techniques that can be applied to tackle these challenges. Topics discussed include energy harvester modeling, energy beamforming for WPT, channel acquisition, power region characterization in multi-user WPT, waveform design with linear and non-linear energy receiver model, safety and health issues of WPT, massive multiple-input multiple-output and millimeter wave enabled WPT, wireless charging control, and wireless power and communication systems co-design. We also point out directions that are promising for future research.", "title": "" }, { "docid": "3bb9fc6e09c9ce13252a04d6978d1bfc", "text": "Recently, sparse coding has been successfully applied in visual tracking. The goal of this paper is to review the state-of-the-art tracking methods based on sparse coding. We first analyze the benefits of using sparse coding in visual tracking and then categorize these methods into appearance modeling based on sparse coding (AMSC) and target searching based on sparse representation (TSSR) as well as their combination. For each categorization, we introduce the basic framework and subsequent improvements with emphasis on their advantages and disadvantages. Finally, we conduct extensive experiments to compare the representative methods on a total of 20 test sequences. The experimental results indicate that: (1) AMSC methods significantly outperform TSSR methods. (2) For AMSC methods, both discriminative dictionary and spatial order reserved pooling operators are important for achieving high tracking accuracy. (3) For TSSR methods, the widely used identity pixel basis will degrade the performance when the target or candidate images are not aligned well or severe occlusion occurs. (4) For TSSR methods, ℓ1 norm minimization is not necessary. In contrast, ℓ2 norm minimization can obtain comparable performance but with lower computational cost. The open questions and future research topics are also discussed. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "50268ed4eb8f14966d9d0ec32b01429f", "text": "Women's empowerment is an important goal in achieving sustainable development worldwide. Offering access to microfinance services to women is one way to increase women's empowerment. However, empirical evidence provides mixed results with respect to its effectiveness. We reviewed previous research on the impact of microfinance services on different aspects of women's empowerment. We propose a Three-Dimensional Model of Women's Empowerment to integrate previous findings and to gain a deeper understanding of women's empowerment in the field of microfinance services. This model proposes that women's empowerment can take place on three distinct dimensions: (1) the micro-level, referring to an individuals' personal beliefs as well as actions, where personal empowerment can be observed (2) the meso-level, referring to beliefs as well as actions in relation to relevant others, where relational empowerment can be observed and (3) the macro-level, referring to outcomes in the broader, societal context where societal empowerment can be observed. Importantly, we propose that time and culture are important factors that influence women's empowerment. 
We suggest that the time lag between an intervention and its evaluation may influence when empowerment effects on the different dimensions occur and that the type of intervention influences the sequence in which the three dimensions can be observed. We suggest that cultures may differ with respect to which components of empowerment are considered indicators of empowerment and how women's position in society may influence the development of women's empowerment. We propose that a Three-Dimensional Model of Women's Empowerment should guide future programs in designing, implementing, and evaluating their interventions. As such our analysis offers two main practical implications. First, based on the model we suggest that future research should differentiate between the three dimensions of women's empowerment to increase our understanding of women's empowerment and to facilitate comparisons of results across studies and cultures. Second, we suggest that program designers should specify how an intervention should stimulate which dimension(s) of women's empowerment. We hope that this model inspires longitudinal and cross-cultural research to examine the development of women's empowerment on the personal, relational, and societal dimension.", "title": "" }, { "docid": "32acba3e072e0113759278c57ee2aee2", "text": "Software product lines (SPL) relying on UML technology have been a breakthrough in software reuse in the IT domain. In the industrial automation domain, SPL are not yet established in industrial practice. One reason for this is that conventional function block programming techniques do not adequately support SPL architecture definition and product configuration, while UML tools are not industrially accepted for control software development. In this paper, the use of object oriented (OO) extensions of IEC 61131–3 are used to bridge this gap. The SPL architecture and product specifications are expressed as UML class diagrams, which serve as straightforward specifications for configuring the IEC 61131–3 control application with OO extensions. A product configurator tool has been developed using PLCopen XML technology to support the generation of an executable IEC 61131–3 application according to chosen product options. The approach is demonstrated using a mobile elevating working platform as a case study.", "title": "" }, { "docid": "2ecd0bf132b3b77dc1625ef8d09c925b", "text": "This paper presents an efficient algorithm to compute time-to-x (TTX) criticality measures (e.g. time-to-collision, time-to-brake, time-to-steer). Such measures can be used to trigger warnings and emergency maneuvers in driver assistance systems. Our numerical scheme finds a discrete time approximation of TTX values in real time using a modified binary search algorithm. It computes TTX values with high accuracy by incorporating realistic vehicle dynamics and using realistic emergency maneuver models. It is capable of handling complex object behavior models (e.g. motion prediction based on DGPS maps). Unlike most other methods presented in the literature, our approach enables decisions in scenarios with multiple static and dynamic objects in the scene. 
The flexibility of our method is demonstrated on two exemplary applications: intersection assistance for left-turn-across-path scenarios and pedestrian protection by automatic steering.", "title": "" }, { "docid": "1f1158ad55dc8a494d9350c5a5aab2f2", "text": "Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is comparatively low, given their age, education and intellectual reasoning ability. Low performance due to cerebral trauma is called acquired dyscalculia. Mathematical learning difficulties with similar features but without evidence of cerebral trauma are referred to as developmental dyscalculia. This review identifies types of developmental dyscalculia, the neuropsychological processes that are linked with them and procedures for identifying dyscalculia. The concept of dyslexia is one with which professionals working in the areas of special education, learning disabilities are reasonably familiar. The concept of dyscalculia, on the other hand, is less well known. This article describes this condition and examines its implications for understanding mathematics learning disabilities. Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is significantly depressed, given their age, education and intellectual reasoning ability ( Mental Disorders IV (DSM IV)). When this loss of ability to calculate is due to cerebral trauma, the condition is called acalculia or acquired dyscalculia. Mathematical learning difficulties that share features with acquired dyscalculia but without evidence of cerebral trauma are referred to as developmental dyscalculia (Hughes, Kolstad & Briggs, 1994). The focus of this review is on developmental dyscalculia (DD). Students who show DD have difficulty recalling number facts and completing numerical calculations. They also show chronic difficulties with numerical processing skills such recognizing number symbols, writing numbers or naming written numerals and applying procedures correctly (Gordon, 1992). They may have low self efficacy and selective attentional difficulties (Gross Tsur, Auerbach, Manor & Shalev, 1996). Not all students who display low mathematics achievement have DD. Mathematics underachievement can be due to a range of causes, for example, lack of motivation or interest in learning mathematics, low self efficacy, high anxiety, inappropriate earlier teaching or poor school attendance. It can also be due to generalised poor learning capacity, immature general ability, severe language disorders or sensory processing. Underachievement due to DD has a neuropsychological foundation. The students lack particular cognitive or information processing strategies necessary for acquiring and using arithmetic knowledge. They can learn successfully in most contexts and have relevant general language and sensory processing. They also have access to a curriculum from which their peers learn successfully. It is also necessary to clarify the relationship between DD and reading disabilities. Some aspects of both literacy and arithmetic learning draw on the same cognitive processes. 
Both, for example, 1 This article was published in Australian Journal of Learning Disabilities, 2003 8, (4).", "title": "" }, { "docid": "83e3ce2b70e1f06073fd0a476bf04ff7", "text": "Each year, a number of natural disasters strike across the globe, killing hundreds and causing billions of dollars in property and infrastructure damage. Minimizing the impact of disasters is imperative in today's society. As the capabilities of software and hardware evolve, so does the role of information and communication technology in disaster mitigation, preparation, response, and recovery. A large quantity of disaster-related data is available, including response plans, records of previous incidents, simulation data, social media data, and Web sites. However, current data management solutions offer few or no integration capabilities. Moreover, recent advances in cloud computing, big data, and NoSQL open the door for new solutions in disaster data management. In this paper, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM), with the objectives of 1) storing large amounts of disaster-related data from diverse sources, 2) facilitating search, and 3) supporting their interoperability and integration. Data are stored in a cloud environment using a combination of relational and NoSQL databases. The case study presented in this paper illustrates the use of Disaster-CDM on an example of simulation models.", "title": "" }, { "docid": "64e2b73e8a2d12a1f0bbd7d07fccba72", "text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an allaround evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.", "title": "" }, { "docid": "21393a1c52b74517336ef3e08dc4d730", "text": "The technical part of these Guidelines and Recommendations, produced under the auspices of EFSUMB, provides an introduction to the physical principles and technology on which all forms of current commercially available ultrasound elastography are based. A difference in shear modulus is the common underlying physical mechanism that provides tissue contrast in all elastograms. The relationship between the alternative technologies is considered in terms of the method used to take advantage of this. The practical advantages and disadvantages associated with each of the techniques are described, and guidance is provided on optimisation of scanning technique, image display, image interpretation and some of the known image artefacts.", "title": "" }, { "docid": "22eb9b1de056d03d15c0a3774a898cfd", "text": "Massive volumes of big RDF data are growing beyond the performance capacity of conventional RDF data management systems operating on a single node. Applications using large RDF data demand efficient data partitioning solutions for supporting RDF data access on a cluster of compute nodes. In this paper we present a novel semantic hash partitioning approach and implement a Semantic HAsh Partitioning-Enabled distributed RDF data management system, called Shape. 
This paper makes three original contributions. First, the semantic hash partitioning approach we propose extends the simple hash partitioning method through direction-based triple groups and direction-based triple replications. The latter enhances the former by controlled data replication through intelligent utilization of data access locality, such that queries over big RDF graphs can be processed with zero or a very small amount of inter-machine communication cost. Second, we generate locality-optimized query execution plans that are more efficient than popular multi-node RDF data management systems by effectively minimizing the inter-machine communication cost for query processing. Third but not least, we provide a suite of locality-aware optimization techniques to further reduce the partition size and cut down on the inter-machine communication cost during distributed query processing. Experimental results show that our system scales well and can process big RDF datasets more efficiently than existing approaches.", "title": "" }, { "docid": "e472a8e75ddf72549aeb255aa3d6fb79", "text": "In the presence of normal sensory and motor capacity, intelligent behavior is widely acknowledged to develop from the interaction of short- and long-term memory. While the behavioral, cellular, and molecular underpinnings of the long-term memory process have long been associated with the hippocampal formation, and this structure has become a major model system for the study of memory, the neural substrates of specific short-term memory functions have more and more become identified with prefrontal cortical areas (Goldman-Rakic, 1987; Fuster, 1989). The special nature of working memory was first identified in studies of human cognition, and modern neurobiological methods have identified a specific population of neurons, patterns of their intrinsic and extrinsic circuitry, and signaling molecules that are engaged in this process in animals. In this article, I will first define key features of working memory and then describe its biological basis in primates. Distinctive Features of a Working Memory System Working memory is the term applied to the type of memory that is active and relevant only for a short period of time, usually on the scale of seconds. A common example of working memory is keeping in mind a newly read phone number until it is dialed and then immediately forgotten. This process has been captured by the analogy to a mental sketch pad (Baddeley, 1986) and is clearly different from the permanent inscription on neuronal circuitry due to learning. The criterion of being useful or relevant only transiently distinguishes working memory from the processes that have been variously termed semantic (Tulving, 1972) or procedural (Squire and Cohen, 1984) memory, processes that can be considered associative in the traditional sense, i.e., information acquired by the repeated contiguity between stimuli and responses and/or consequences. If semantic and procedural memory are the processes by which stimuli and events acquire archival permanence, working memory is the process for the retrieval and proper utilization of this acquired knowledge.
Considerable evidence is now at hand to demonstrate that the brain obeys the distinction between working and other forms of memory, and that the prefrontal cortex has a preeminent role mainly in the former (Goldman-Rakic, 1987). However, memory-guided behavior obviously reflects the operation of a widely distributed system of brain structures and psychological functions, and understanding …", "title": "" }, { "docid": "4f186e992cd7d5eadb2c34c0f26f4416", "text": "Mobile devices, namely phones and tablets, have long gone \"smart\". Their growing use is both a cause and an effect of their technological advancement. Among other things, their increasing ability to store and exchange sensitive information has caused interest in exploiting their vulnerabilities, and the opposite need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable, and just require the webcam normally equipping the involved devices. On the contrary, the alternative use of fingerprints requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security. Ambient intelligence services bound to the recognition of a user, as well as social applications, such as automatic photo tagging on social networks, can especially exploit face recognition. This paper describes FIRME (Face and Iris Recognition for Mobile Engagement) as a biometric application based on a multimodal recognition of face and iris, which is designed to be embedded in mobile devices. Both design and implementation of FIRME rely on a modular architecture, whose workflow includes separate and replaceable packages. The starting one handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. As for face, an antispoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address also security-critical applications, FIRME can perform continuous reidentification and best sample selection. To further address the possible limited resources of mobile devices, all algorithms are optimized to be low-demanding and computation-light. The term \"mobile\" referred to capture equipment for different kinds of signals, e.g. images, has been long used in many cases where field activities required special portability and flexibility. As an example we can mention mobile biometric identification devices used by the U.S. army for different kinds of security tasks. Due to the critical task involving them, such devices have to offer remarkable quality, in terms of resolution and quality of the acquired data. Notwithstanding this formerly consolidated reference for the term mobile, nowadays it most often refers to modern phones, tablets and similar smart devices, for which new and engaging applications are designed. For this reason, from now on, the term mobile will refer only to …", "title": "" }, { "docid": "739db4358ac89d375da0ed005f4699ad", "text": "All doctors have encountered patients whose symptoms they cannot explain. These individuals frequently provoke despair and disillusionment. Many doctors make a link between inexplicable physical symptoms and assumed psychiatric illness. 
An array of adjectives in medicine apply to symptoms without established organic basis – ‘supratentorial’, ‘psychosomatic’, ‘functional’ – and these are sometimes used without reference to their real meaning. In psychiatry, such symptoms fall under the umbrella of the somatoform disorders, which includes a broad range of diagnoses. Conversion disorder is just one of these. Its meaning is not always well understood and it is often confused with somatisation disorder.† Our aim here is to clarify the notion of a conversion disorder (and the differences between conversion and other somatoform disorders) and to discuss prevalence, aetiology, management and prognosis.", "title": "" }, { "docid": "39958f4825796d62e7a5935d04d5175d", "text": "This paper presents a wireless system which enables real-time health monitoring of multiple patient(s). In health care centers patient's data such asheart rate needs to be constantly monitored. The proposed system monitors the heart rate and other such data of patient's body. For example heart rate is measured through a Photoplethysmograph. A transmitting module is attached which continuously transmits the encoded serial data using Zigbee module. A receiver unit is placed in doctor's cabin, which receives and decodes the data and continuously displays it on a User interface visible on PC/Laptop. Thus doctor can observe and monitor many patients at the same time. System also continuously monitors the patient(s) data and in case of any potential irregularities, in the condition of a patient, the alarm system connected to the system gives an audio-visual warning signal that the patient of a particular room needs immediate attention. In case, the doctor is not in his chamber, the GSM modem connected to the system also sends a message to all the doctors of that unit giving the room number of the patient who needs immediate care.", "title": "" }, { "docid": "7c86594614a6bd434ee4e749eb661cee", "text": "The ACT-R system is a general system for modeling a wide range of higher level cognitive processes. Recently, it has been embellished with a theory of how its higher level processes interact with a visual interface. This includes a theory of how visual attention can move across the screen, encoding information into a form that can be processed by ACT-R. This system is applied to modeling several classic phenomena in the literature that depend on the speed and selectivity with which visual attention can move across a visual display. ACT-R is capable of interacting with the same computer screens that subjects do and, as such, is well suited to provide a model for tasks involving human-computer interaction. In this article, we discuss a demonstration of ACT-R's application to menu selection and show that the ACT-R theory makes unique predictions, without estimating any parameters, about the time to search a menu. These predictions are confirmed. John R. Anderson is a cognitive scientist with an interest in cognitive architectures and intelligent tutoring systems; he is a Professor of Psychology and Computer Science at Carnegie Mellon University. Michael Matessa is a graduate student studying cognitive psychology at Carnegie Mellon University; his interests include cognitive architectures and modeling the acquisition of information from the environment. 
Christian Lebiere is a computer scientist with an interest in intelligent architectures; he is a Research Programmer in the Department of Psychology and a graduate student in the School of Computer Science at Carnegie Mellon University.", "title": "" }, { "docid": "42a0e0ab1ae2b190c913e69367b85001", "text": "One of the most challenging problems facing network operators today is the identification of network attacks, due to the extensive number of vulnerabilities in computer systems and the creativity of attackers. To address this problem, we present a deep learning approach for intrusion detection systems. Our approach uses the Deep Auto-Encoder (DAE) as one of the most well-known deep learning models. The proposed DAE model is trained in a greedy layer-wise fashion in order to avoid overfitting and local optima. The experimental results on the KDD-CUP'99 dataset show that our approach provides substantial improvement over other deep learning-based approaches in terms of accuracy, detection rate and false alarm rate.", "title": "" }, { "docid": "1bdf1bfe81bf6f947df2254ae0d34227", "text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame-level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.", "title": "" } ]
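The intrusion-detection passage above trains a deep auto-encoder greedily, one layer at a time. The following is a minimal sketch of that layer-wise scheme, not the authors' implementation: the layer sizes, learning rate, tied sigmoid units, and the random stand-in for KDD-CUP'99 features are all assumptions made for illustration.

```python
import numpy as np

# Illustrative greedy layer-wise pretraining of a stacked auto-encoder.
# Layer sizes, learning rate and sigmoid units are assumptions, not the
# configuration used in the passage above.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder_layer(data, n_hidden, lr=0.1, epochs=50):
    """Train one tied-weight auto-encoder layer with plain gradient descent."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_visible)
    for _ in range(epochs):
        h = sigmoid(data @ W + b_h)          # encode
        recon = sigmoid(h @ W.T + b_v)       # decode with tied weights
        err = recon - data                   # reconstruction error
        grad_v = err * recon * (1 - recon)   # backprop through decoder sigmoid
        grad_h = (grad_v @ W) * h * (1 - h)  # backprop through encoder sigmoid
        W -= lr * (data.T @ grad_h + grad_v.T @ h) / len(data)
        b_h -= lr * grad_h.mean(axis=0)
        b_v -= lr * grad_v.mean(axis=0)
    return W, b_h

# Toy "network traffic" features; a real system would use KDD-CUP'99 records.
X = rng.random((256, 41))
codes, layer_sizes = X, [32, 16, 8]
stack = []
for n_hidden in layer_sizes:
    W, b = train_autoencoder_layer(codes, n_hidden)
    stack.append((W, b))
    codes = sigmoid(codes @ W + b)           # feed codes to the next layer
print("final code shape:", codes.shape)      # e.g. (256, 8)
```

Each trained layer's hidden codes become the input of the next layer; in a real intrusion-detection system the final codes would feed a classifier over attack labels.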
scidocsrr
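The automatic-music-transcription passage in the row above combines frame-level acoustic predictions with a recurrent music language model and then searches globally for a good output sequence. A much-simplified sketch of that idea follows: a bigram table stands in for the RNN language model, all probabilities are invented, and a beam search scores each candidate sequence by the sum of acoustic and language-model log-probabilities.

```python
import numpy as np

# Toy fusion of frame-level acoustic posteriors with a symbol-level prior via
# beam search. A bigram table stands in for the RNN music language model of
# the passage above; all probabilities here are invented for illustration.

rng = np.random.default_rng(1)
n_frames, n_symbols, beam_width, lm_weight = 6, 4, 3, 0.5

acoustic = rng.dirichlet(np.ones(n_symbols), size=n_frames)  # P(symbol | frame)
bigram = rng.dirichlet(np.ones(n_symbols), size=n_symbols)   # P(next | previous)

beams = [((), 0.0)]  # (symbol sequence, accumulated log-score)
for t in range(n_frames):
    candidates = []
    for seq, score in beams:
        for s in range(n_symbols):
            lm = np.log(bigram[seq[-1], s]) if seq else 0.0
            new_score = score + np.log(acoustic[t, s]) + lm_weight * lm
            candidates.append((seq + (s,), new_score))
    candidates.sort(key=lambda c: c[1], reverse=True)
    beams = candidates[:beam_width]  # keep the best partial transcriptions

best_seq, best_score = beams[0]
print("best symbol sequence:", best_seq, "log-score:", round(best_score, 2))
```

The real system uses an RNN language model and a more elaborate inference scheme; the bigram table and fixed beam here only illustrate how acoustic and language-model scores can be combined in a global search.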
b788cf524ee3c5d7e09aa6869f8d5ab0
Object detection algorithm for segregating similar coloured objects and database formation
[ { "docid": "d4fa5b9d4530b12a394c1e98ea2793b1", "text": "Most successful object recognition systems rely on binary classification, deciding only if an object is present or not, but not providing information on the actual object location. To perform localization, one can take a sliding window approach, but this strongly increases the computational cost, because the classifier function has to be evaluated over a large set of candidate subwindows. In this paper, we propose a simple yet powerful branch-and-bound scheme that allows efficient maximization of a large class of classifier functions over all possible subimages. It converges to a globally optimal solution typically in sublinear time. We show how our method is applicable to different object detection and retrieval scenarios. The achieved speedup allows the use of classifiers for localization that formerly were considered too slow for this task, such as SVMs with a spatial pyramid kernel or nearest neighbor classifiers based on the chi2-distance. We demonstrate state-of-the-art performance of the resulting systems on the UIUC Cars dataset, the PASCAL VOC 2006 dataset and in the PASCAL VOC 2007 competition.", "title": "" }, { "docid": "550e84d58db67e1d89ac437654f4ccb6", "text": "Skin detection from images, typically used as a preprocessing step, has a wide range of applications such as dermatology diagnostics, human computer interaction designs, and etc. It is a challenging problem due to many factors such as variation in pigment melanin, uneven illumination, and differences in ethnicity geographics. Besides, age and gender introduce additional difficulties to the detection process. It is hard to determine whether a single pixel is skin or nonskin without considering the context. An efficient traditional hand-engineered skin color detection algorithm requires extensive work by domain experts. Recently, deep learning algorithms, especially convolutional neural networks (CNNs), have achieved great success in pixel-wise labeling tasks. However, CNN-based architectures are not sufficient for modeling the relationship between pixels and their neighbors. In this letter, we integrate recurrent neural networks (RNNs) layers into the fully convolutional neural networks (FCNs), and develop an end-to-end network for human skin detection. In particular, FCN layers capture generic local features, while RNN layers model the semantic contextual dependencies in images. Experimental results on the COMPAQ and ECU skin datasets validate the effectiveness of the proposed approach, where RNN layers enhance the discriminative power of skin detection in complex background situations.", "title": "" } ]
[ { "docid": "db8cd016ec1ab0644aa32f68346db618", "text": "This paper presents SpanDex, a set of extensions to Android’s Dalvik virtual machine that ensures apps do not leak users’ passwords. The primary technical challenge addressed by SpanDex is precise, sound, and efficient handling of implicit information flows (e.g., information transferred by a program’s control flow). SpanDex handles implicit flows by borrowing techniques from symbolic execution to precisely quantify the amount of information a process’ control flow reveals about a secret. To apply these techniques at runtime without sacrificing performance, SpanDex runs untrusted code in a data-flow sensitive sandbox, which limits the mix of operations that an app can perform on sensitive data. Experiments with a SpanDex prototype using 50 popular Android apps and an analysis of a large list of leaked passwords predicts that for 90% of users, an attacker would need over 80 login attempts to guess their password. Today the same attacker would need only one attempt for all users.", "title": "" }, { "docid": "6f872a7e9620cff3b1cc4b75a04b09a5", "text": "Effective management of asthma and other respiratory diseases requires constant monitoring and frequent data collection using a spirometer and longitudinal analysis. However, even after three decades of clinical use, there are very few personalized spirometers available on the market, especially those connecting to smartphones. To address this problem, we have developed mobileSpiro, a portable, low-cost spirometer intended for patient self-monitoring. The mobileSpiro API, and the accompanying Android application, interfaces with the spirometer hardware to capture, process and analyze the data. Our key contributions are automated algorithms on the smartphone which play a technician's role in detecting erroneous patient maneuvers, ensuring data quality, and coaching patients with easy-to-understand feedback, all packaged as an Android app. We demonstrate that mobileSpiro is as accurate as a commercial ISO13485 device, with an inter-device deviation in flow reading of less than 8%, and detects more than 95% of erroneous cough maneuvers in a public CDC dataset.", "title": "" }, { "docid": "e708fc43b5ac8abf8cc2707195e8a45e", "text": "We develop analytical models for predicting the magnetic field distribution in Halbach magnetized machines. They are formulated in polar coordinates and account for the relative recoil permeability of the magnets. They are applicable to both internal and external rotor permanent-magnet machines with either an iron-cored or air-cored stator and/or rotor. We compare predicted results with those obtained by finite-element analyses and measurements. We show that the air-gap flux density varies significantly with the pole number and that an optimal combination of the magnet thickness and the pole number exists for maximum air-gap flux density, while the back iron can enhance the air-gap field and electromagnetic torque when the radial thickness of the magnet is small.", "title": "" }, { "docid": "e464cde1434026c17b06716c6a416b7a", "text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. 
In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.", "title": "" }, { "docid": "8383cd262477e2b80c57742229c9dd64", "text": "Pie charts and their variants are prevalent in business settings and many other uses, even if they are not popular with the academic community. In a recent study, we found that contrary to general belief, there is no clear evidence that these charts are read based on the central angle. Instead, area and arc length appear to be at least equally important. In this paper, we build on that study to test several pie chart variations that are popular in information graphics: exploded pie chart, pie with larger slice, elliptical pie, and square pie (in addition to a regular pie chart used as the baseline). We find that even variants that do not distort central angle cause greater error than regular pie charts. Charts that distort the shape show the highest error. Many of our predictions based on the previous study’s results are borne out by this study’s findings.", "title": "" }, { "docid": "30bc7923529eec5ac7d62f91de804f8e", "text": "In this paper, we consider the scene parsing problem and propose a novel MultiPath Feedback recurrent neural network (MPF-RNN) for parsing scene images. MPF-RNN can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse. Different from feedforward CNNs and RNNs with only single feedback, MPFRNN propagates the contextual features learned at top layer through weighted recurrent connections to multiple bottom layers to help them learn better features with such “hindsight”. For better training MPF-RNN, we propose a new strategy that considers accumulative loss at multiple recurrent steps to improve performance of the MPF-RNN on parsing small objects. With these two novel components, MPF-RNN has achieved significant improvement over strong baselines (VGG16 and Res101) on five challenging scene parsing benchmarks, including traditional SiftFlow, Barcelona, CamVid, Stanford Background as well as the recently released large-scale ADE20K.", "title": "" }, { "docid": "bc66054dc60a0b8de2d6e0b769240272", "text": "In this paper, we present the idea and methodologies on predicting the age span of users over microblog dataset. Given a user’s personal information such as user tags, job, education, self-description, and gender, as well as the content of his/her microblogs, we automatically classify the user’s age into one of four predefined ranges. Particularly, we extract a set of features from the given information about the user, and employ a statistic-based framework to solve this problem. The measurement shows that our proposed method incorporating selected features has an accuracy of around 71% on average over the training dataset.", "title": "" }, { "docid": "15ada8f138d89c52737cfb99d73219f0", "text": "A dual-band circularly polarized stacked annular-ring patch antenna is presented in this letter. 
This antenna operates at both the GPS L1 frequency of 1575 MHz and L2 frequency of 1227 MHz, whose frequency ratio is about 1.28. The proposed antenna is formed by two concentric annular-ring patches that are placed on opposite sides of a substrate. Wide axial-ratio bandwidths (larger than 2%), determined by 3-dB axial ratio, are achieved at both bands. The measured gains at 1227 and 1575 MHz are about 6 and 7 dBi, respectively, with the loss of substrate taken into consideration. Both simulated and measured results are presented. The method of varying frequency ratio is also discussed.", "title": "" }, { "docid": "82ef80d6257c5787dcf9201183735497", "text": "Big data is becoming a research focus in intelligent transportation systems (ITS), which can be seen in many projects around the world. Intelligent transportation systems will produce a large amount of data. The produced big data will have profound impacts on the design and application of intelligent transportation systems, which makes ITS safer, more efficient, and profitable. Studying big data analytics in ITS is a flourishing field. This paper first reviews the history and characteristics of big data and intelligent transportation systems. The framework of conducting big data analytics in ITS is discussed next, where the data source and collection methods, data analytics methods and platforms, and big data analytics application categories are summarized. Several case studies of big data analytics applications in intelligent transportation systems, including road traffic accidents analysis, road traffic flow prediction, public transportation service plan, personal travel route plan, rail transportation management and control, and assets maintenance are introduced. Finally, this paper discusses some open challenges of using big data analytics in ITS.", "title": "" }, { "docid": "52afd42744f96b3c6492186c9ddd16a6", "text": "Structured hourly nurse rounding is an effective method to improve patient satisfaction and clinical outcomes. This program evaluation describes outcomes related to the implementation of hourly nurse rounding in one medical-surgical unit in a large community hospital. Overall Hospital Consumer Assessment of Healthcare Providers and Systems domain scores increased with the exception of responsiveness of staff. Patient falls and hospital-acquired pressure ulcers decreased during the project period.", "title": "" }, { "docid": "cb39f6ac5646e733604902a4b74b797c", "text": "In this paper, we present a generative model based approach to solve the multi-view stereo problem. The input images are considered to be generated by either one of two processes: (i) an inlier process, which generates the pixels which are visible from the reference camera and which obey the constant brightness assumption, and (ii) an outlier process which generates all other pixels. Depth and visibility are jointly modelled as a hiddenMarkov Random Field, and the spatial correlations of both are explicitly accounted for. Inference is made tractable by an EM-algorithm, which alternates between estimation of visibility and depth, and optimisation of model parameters. We describe and compare two implementations of the E-step of the algorithm, which correspond to the Mean Field and Bethe approximations of the free energy. 
The approach is validated by experiments on challenging real-world scenes, of which two are contaminated by independently moving objects.", "title": "" }, { "docid": "2faa73eec710382a6f3d658562bf7928", "text": "We appreciate the comments provided by Thompson et al. in their Letter to the Editor, regarding our study “The myth: in vivo degradation of polypropylene-based meshes” [1]. However, we question the motives of the authors, who have notably disclosed that they provide medicolegal testimony on behalf of the plaintiffs in mesh litigation, for bringing their courtroom rhetoric into this discussion. Thompson et al. grossly erred in claiming that we only analyzed the exposed surface of the explants, and not the flaked material that had been removed when cleaning the explants (“removed material”) and ended up in the cleaning solution. As stated in our paper, however, the flaked material was analyzed using light microscopy (LM), scanning electron microscopy (SEM), and Fourier transform infrared (FTIR) microscopy before cleaning and after each of the five sequences of the overall cleaning process. Analyzing the cleaning solution would be redundant and therefore serve no purpose, i.e., the material on the surface was already analyzed and then ended up in the cleaning solution. Based on our chemical and microscopic analyses (LM, SEM, and FTIR), we concluded that the explanted Prolene meshes that we examined did not degrade or oxidize in vivo. Thompson et al. noted that there are “well over 100 peer-reviewed articles, accepting or describing the degradation of PP [polypropylene] in variable conditions and degradation of other implantable polymers in the body.” They also claimed that they are not aware of any other peer-reviewed journal articles supporting the notion that PP does not degrade in the body. As stated in our paper, it is well documented that unstabilized PP oxidizes readily under ultraviolet (UV) light and upon exposure to high temperatures. However, as we also discuss and cite in our paper, properly formulated PP is stable in oxidizing media, including elevated temperatures, in in vivo applications, and to a lesser extent, under UV light. Thompson et al. further claimed that our study “does not explain the multiple features of PP degradation reported in the literature.” This is an erroneous statement because they must have either failed to review or chose to ignore the discussion of the literature in our paper. For instance, the literature is replete with the chemistry of PP degradation, confirming simultaneous production of carbonyl groups and loss of molecular weight. It is well known chemistry that oxidative degradation of PP produces carbonyl groups, and if there is no carbonyl group formation, there is no oxidative degradation. To further highlight this point, Clavé et al. [2] have often been cited as supporting the notion that PP degrades in vivo, and as discussed in our manuscript, their findings and statements in the study confirmed that they were unable to prove the existence of PP degradation from any of their various tests. They further failed to note that Liebert’s investigation reported explicitly that stabilized PP, such as Prolene, did not degrade. Thompson et al. also claimed that the degradation process for PP continues until no more PP can be oxidized, with the corresponding appearance of external surface features and hardening and shrinkage of the material.
The fallacy of their statement, in the context of the explanted meshes that we examined, is highlighted by the clean fibers that retained their manufacturing extrusion lines and the lack of a wide range of crack morphology (e.g., varying crack depths into the core of the PP fibers) for a given explant and across explants from different patients with different implantation durations. This reply refers to the comment available at doi:10.1007/s00192-016-3233-z.", "title": "" }, { "docid": "1fba9ed825604e8afde8459a3d3dc0c0", "text": "Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a \"learning via translation\" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of an Siamese network and a CycleGAN. Through domain adaptation experiment, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.", "title": "" }, { "docid": "fec4f80f907d65d4b73480b9c224d98a", "text": "This paper presents a novel finite position set-phase locked loop (FPS-PLL) for sensorless control of surface-mounted permanent-magnet synchronous generators (PMSGs) in variable-speed wind turbines. The proposed FPS-PLL is based on the finite control set-model predictive control concept, where a finite number of rotor positions are used to estimate the back electromotive force of the PMSG. Then, the estimated rotor position, which minimizes a certain cost function, is selected to be the optimal rotor position. This eliminates the need of a fixed-gain proportional-integral controller, which is commonly utilized in the conventional PLL. The performance of the proposed FPS-PLL has been experimentally investigated and compared with that of the conventional one using a 14.5 kW PMSG with a field-oriented control scheme utilized as the generator control strategy. Furthermore, the robustness of the proposed FPS-PLL is investigated against PMSG parameters variations.", "title": "" }, { "docid": "90c2121fc04c0c8d9c4e3d8ee7b8ecc0", "text": "Measuring similarity between two data objects is a more challenging problem for data mining and knowledge discovery tasks. The traditional clustering algorithms have been mainly stressed on numerical data, the implicit property of which can be exploited to define distance function between the data points to define similarity measure. The problem of similarity becomes more complex when the data is categorical which do not have a natural ordering of values or can be called as non geometrical attributes. 
Clustering relational data sets in which the majority of attributes are of categorical type raises interesting issues. No earlier work has been done on clustering categorical attributes of relational data sets making use of the property of functional dependency as a parameter to measure similarity. This paper is an extension of earlier work on clustering relational data sets where domains are unique and similarity is context-based, and it introduces a new notion of similarity based on the dependency of an attribute on other attributes prevalent in the relational data set. This paper also gives a brief overview of popular similarity measures for categorical attributes. This novel similarity measure can be applied to tuples and their respective values. An important property of categorical domains is that they have a smaller number of attribute values. The similarity measure for relational data sets can then be applied to these smaller data sets for efficient results.", "title": "" }, { "docid": "0ebcd0c087454a9812ee54a0cd71a1a9", "text": "In this paper, we present the Smart City Architecture developed in the context of the ARTEMIS JU SP3 SOFIA project. It is an Event Driven Architecture that allows the management and cooperation of heterogeneous sensors for monitoring public spaces. The main components of the architecture are implemented in a testbed on a subway scenario, with the objective of demonstrating that our proposed solution can enhance the detection of anomalous events and simplify both the operators' tasks and the communications to passengers in case of emergency.", "title": "" }, { "docid": "21dd193ec6849fa78ba03333708aebea", "text": "Since the inception of Bitcoin technology, its underlying data structure, the blockchain, has garnered much attention due to properties such as decentralization, transparency, and immutability. These properties make blockchains suitable for apps that require disintermediation through trustless exchange, consistent and incorruptible transaction records, and operational models beyond cryptocurrency. In particular, blockchain and its programmable smart contracts have the potential to address healthcare interoperability issues, such as enabling effective interactions between users and medical applications, delivering patient data securely to a variety of organizations and devices, and improving the overall efficiency of medical practice workflow. Despite the interest in using blockchain technology for healthcare interoperability, however, little information is available on the concrete architectural styles and recommendations for designing blockchain-based apps targeting healthcare. This paper provides an initial step in filling this gap by showing: (1) the features and implementation challenges in healthcare interoperability, (2) an end-to-end case study of a blockchain-based healthcare app that we are developing, and (3) how designing blockchain-based apps using familiar software patterns can help address healthcare-specific challenges.", "title": "" }, { "docid": "456fd41267a82663fee901b111ff7d47", "text": "The tagging of Named Entities, the names of particular things or classes, is regarded as an important component technology for many NLP applications. The first Named Entity set had 7 types: organization, location, person, date, time, money and percent expressions. Later, in the IREX project, artifact was added, and ACE added two, GPE and facility, to pursue the generalization of the technology. 
However, 7 or 8 kinds of NE are not broad enough to cover general applications. We proposed about 150 categories of NE (Sekine et al. 2002) and now we have extended it again to 200 categories. Also we have developed dictionaries and an automatic tagger for NEs in Japanese.", "title": "" }, { "docid": "71f7ce3b6e4a20a112f6a1ae9c22e8e1", "text": "The neural correlates of many emotional states have been studied, most recently through the technique of fMRI. However, nothing is known about the neural substrates involved in evoking one of the most overwhelming of all affective states, that of romantic love, about which we report here. The activity in the brains of 17 subjects who were deeply in love was scanned using fMRI, while they viewed pictures of their partners, and compared with the activity produced by viewing pictures of three friends of similar age, sex and duration of friendship as their partners. The activity was restricted to foci in the medial insula and the anterior cingulate cortex and, subcortically, in the caudate nucleus and the putamen, all bilaterally. Deactivations were observed in the posterior cingulate gyrus and in the amygdala and were right-lateralized in the prefrontal, parietal and middle temporal cortices. The combination of these sites differs from those in previous studies of emotion, suggesting that a unique network of areas is responsible for evoking this affective state. This leads us to postulate that the principle of functional specialization in the cortex applies to affective states as well.", "title": "" } ]
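Among the passages above, the finite position set-phase locked loop (FPS-PLL) estimates rotor position by evaluating a finite set of candidate angles and keeping the one that minimises a cost on the estimated back-EMF. A toy version of that selection step is sketched below; the sinusoidal back-EMF model, the candidate grid, and the d-axis cost are illustrative assumptions rather than the paper's design.

```python
import numpy as np

# Finite-set search for the rotor position that minimises a back-EMF cost,
# loosely in the spirit of the FPS-PLL passage above. The back-EMF model,
# candidate count and cost function are illustrative assumptions only.

true_theta = 1.234                    # unknown electrical rotor angle (rad)
E = 100.0                             # back-EMF magnitude (V), arbitrary
e_alpha = -E * np.sin(true_theta)     # measured alpha-axis back-EMF (simulated)
e_beta = E * np.cos(true_theta)       # measured beta-axis back-EMF (simulated)

candidates = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)   # finite position set
# d-axis projection of the back-EMF for each candidate angle; it approaches zero
# when the candidate matches the true position (up to a 180-degree ambiguity).
cost = np.abs(e_alpha * np.cos(candidates) + e_beta * np.sin(candidates))
best = candidates[np.argmin(cost)]
print("estimated angle: %.3f rad (true %.3f rad)" % (best, true_theta))
```

In practice the candidate set would be small and centred on the previous estimate, and the winning position would be fed back to the control loop at every sampling period instead of the fixed-gain PI controller of a conventional PLL.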
scidocsrr
8e0d73df450e50012dccc681672d87f1
Adversarial Message Passing For Graphical Models
[ { "docid": "234acba61dacec90d771a396f04e19f8", "text": "Image Super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high-resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Furthermore, MAP inference is often performed via optimisationbased iterative algorithms which don’t compare well with the efficiency of neuralnetwork-based alternatives. Here we introduce new methods for amortised MAP inference whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions ensuring that the high resolution output of the network is always consistent with the low resolution input. We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood-trained image prior. Our experiments show that the GAN based approach performs best on real image data, achieving particularly good results in photo-realistic texture SR.", "title": "" }, { "docid": "a33cf416cf48f67cd0a91bf3a385d303", "text": "Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generativeadversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. We show that any f -divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.", "title": "" }, { "docid": "9b9181c7efd28b3e407b5a50f999840a", "text": "As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. 
Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines. Introduction Generating sequential synthetic data that mimics the real one is an important problem in unsupervised learning. Recently, recurrent neural networks (RNNs) with long short-term memory (LSTM) cells (Hochreiter and Schmidhuber 1997) have shown excellent performance ranging from natural language generation to handwriting generation (Wen et al. 2015; Graves 2013). The most common approach to training an RNN is to maximize the log predictive likelihood of each true token in the training sequence given the previous observed tokens (Salakhutdinov 2009). However, as argued in (Bengio et al. 2015), the maximum likelihood approaches suffer from so-called exposure bias in the inference stage: the model generates a sequence iteratively and predicts the next token conditioned on its previously predicted ones, which may never be observed in the training data. Such a discrepancy between training and inference can incur accumulatively along with the sequence and will become prominent as the length of the sequence increases. To address this problem, (Bengio et al. 2015) proposed a training strategy called scheduled sampling (SS), where the generative model is partially fed with its own synthetic data as prefix (observed tokens) rather than the true data when deciding the next token in the training stage. Nevertheless, (Huszár 2015) showed that SS is an inconsistent training strategy and fails to address the problem fundamentally. Another possible solution to the training/inference discrepancy problem is to build the loss function on the entire generated sequence instead of each transition. For instance, in the application of machine translation, a task-specific sequence score/loss, bilingual evaluation understudy (BLEU) (Papineni et al. 2002), can be adopted to guide the sequence generation. However, in many other practical applications, such as poem generation (Zhang and Lapata 2014) and chatbot (Hingston 2009), a task-specific loss may not be directly available to score a generated sequence accurately. Generative adversarial net (GAN), proposed by (Goodfellow and others 2014), is a promising framework for alleviating the above problem. Specifically, in GAN a discriminative net D learns to distinguish whether a given data instance is real or not, and a generative net G learns to confuse D by generating high quality data. This approach has been successful and has mostly been applied in computer vision tasks of generating samples of natural images (Denton et al. 2015). Unfortunately, applying GAN to generating sequences has two problems. 
Firstly, GAN is designed for generating real-valued, continuous data but has difficulties in directly generating sequences of discrete tokens, such as texts (Huszár 2015). The reason is that in GANs, the generator starts with random sampling first and then applies a deterministic transform, governed by the model parameters. As such, the gradient of the loss from D w.r.t. the outputs by G is used to guide the generative model G (parameters) to slightly change the generated value to make it more realistic. If the generated data is based on discrete tokens, the “slight change” guidance from the discriminative net makes little sense because there is probably no corresponding token for such slight change in the limited dictionary space (Goodfellow 2016). Secondly, GAN can only give the score/loss for an entire sequence when it has been generated; for a partially generated sequence, it is non-trivial to balance how good it is now against its future score once the entire sequence is completed. In this paper, to address the above two issues, we follow (Bachman and Precup 2015; Bahdanau et al. 2016) and consider the sequence generation procedure as a sequential decision making process. The generative model is treated as an agent of reinforcement learning (RL); the state is the generated tokens so far and the action is the next token to be generated. Unlike the work in (Bahdanau et al. 2016) that requires a task-specific sequence score, such as BLEU in machine translation, to give the reward, we employ a discriminator to evaluate the sequence and feed the evaluation back to guide the learning of the generative model. To solve the problem that the gradient cannot pass back to the generative model when the output is discrete, we regard the generative model as a stochastic parametrized policy. In our policy gradient, we employ Monte Carlo (MC) search to approximate the state-action value. We directly train the policy (generative model) via policy gradient (Sutton et al. 1999), which naturally avoids the differentiation difficulty for discrete data in a conventional GAN. Extensive experiments based on synthetic and real data are conducted to investigate the efficacy and properties of the proposed SeqGAN. In our synthetic data environment, SeqGAN significantly outperforms the maximum likelihood methods, scheduled sampling and PG-BLEU. In three real-world tasks, i.e. poem generation, speech language generation and music generation, SeqGAN significantly outperforms the compared baselines in various metrics including human expert judgement. Related Work Deep generative models have recently drawn significant attention, and the ability of learning over large (unlabeled) data endows them with more potential and vitality (Salakhutdinov 2009; Bengio et al. 2013). (Hinton, Osindero, and Teh 2006) first proposed to use the contrastive divergence algorithm to efficiently train deep belief nets (DBN). (Bengio et al. 2013) proposed the denoising autoencoder (DAE), which learns the data distribution in a supervised learning fashion. Both DBN and DAE learn a low-dimensional representation (encoding) for each data instance and generate it from a decoding network. Recently, the variational autoencoder (VAE), which combines deep learning with statistical inference, was introduced to represent a data instance in a latent hidden space (Kingma and Welling 2014), while still utilizing (deep) neural networks for non-linear mapping. The inference is done via variational methods. 
All these generative models are trained by maximizing (the lower bound of) training data likelihood, which, as mentioned by (Goodfellow and others 2014), suffers from the difficulty of approximating intractable probabilistic computations. (Goodfellow and others 2014) proposed an alternative training methodology to generative models, i.e. GANs, where the training procedure is a minimax game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has gained striking successes in natural image generation (Denton et al. 2015). However, little progress has been made in applying GANs to sequence discrete data generation problems, e.g. natural language generation (Huszár 2015). This is due to the generator network in GAN is designed to be able to adjust the output continuously, which does not work on discrete data generation (Goodfellow 2016). On the other hand, a lot of efforts have been made to generate structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation (Sutskever, Vinyals, and Le 2014; Bahdanau, Cho, and Bengio 2014). The most popular way of training RNNs is to maximize the likelihood of each token in the training data whereas (Bengio et al. 2015) pointed out that the discrepancy between training and generating makes the maximum likelihood estimation suboptimal and proposed scheduled sampling strategy (SS). Later (Huszár 2015) theorized that the objective function underneath SS is improper and explained the reason why GANs tend to generate natural-looking samples in theory. Consequently, the GANs have great potential but are not practically feasible to discrete probabilistic models currently. As pointed out by (Bachman and Precup 2015), the sequence data generation can be formulated as a sequential decision making process, which can be potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods (Sutton et al. 1999) can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation (Sutskever, Vinyals, and Le 2014), the reward signal is meaningful only for the entire sequence, for instance in the game of Go (Silver et al. 2016), the reward signal is only set at the end of the game. In", "title": "" } ]
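The SeqGAN passage above treats the generator as a stochastic policy trained by policy gradient, with Monte Carlo rollouts turning the discriminator's score for a finished sequence into per-step action values. The sketch below is a deliberately tiny version of that loop: the bigram softmax policy, the frozen stand-in reward function, and every hyperparameter are invented for illustration and do not reproduce the paper's models.

```python
import numpy as np

# Minimal SeqGAN-style update: the generator is a bigram softmax policy, the
# "discriminator" is a stand-in scoring function, and Monte Carlo rollouts
# estimate the reward of each intermediate action.

rng = np.random.default_rng(3)
V, T, START, n_rollouts, lr = 5, 4, 0, 16, 0.5
logits = np.zeros((V, V))                       # bigram policy parameters

def probs(prev):
    p = np.exp(logits[prev] - logits[prev].max())
    return p / p.sum()

def sample_continuation(seq):
    """Roll the current policy forward until the sequence has length T."""
    seq = list(seq)
    while len(seq) < T:
        prev = seq[-1] if seq else START
        seq.append(int(rng.choice(V, p=probs(prev))))
    return seq

def discriminator(seq):
    """Dummy reward in [0, 1]: fraction of strictly increasing adjacent pairs."""
    return float(np.mean([seq[i] < seq[i + 1] for i in range(len(seq) - 1)]))

for step in range(200):
    seq, grads = [], []
    for t in range(T):
        prev = seq[-1] if seq else START
        p = probs(prev)
        a = int(rng.choice(V, p=p))
        seq.append(a)
        # Monte Carlo estimate of the action value: complete the sequence
        # several times with the current policy and average the final reward.
        if t == T - 1:
            q = discriminator(seq)
        else:
            q = np.mean([discriminator(sample_continuation(seq)) for _ in range(n_rollouts)])
        onehot = np.zeros(V)
        onehot[a] = 1.0
        grads.append((prev, (onehot - p) * q))  # REINFORCE gradient of log-prob
    for prev, g in grads:
        logits[prev] += lr * g                  # ascend the expected reward

demo = sample_continuation([])
print("sample after training:", demo, "reward:", discriminator(demo))
```

In the actual method the discriminator is itself a neural network that is periodically retrained on fresh generator samples; it is kept fixed here only to keep the example short.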
[ { "docid": "4753890e95974bc9f7d795ded183fa89", "text": "Large scale knowledge bases systems are difficult and expensive to construct. If we could share knowledge across systems, costs would be reduced. However, because knowledge bases are typically constructed from scratch, each with their own idiosyncratic structure, sharing is difficult. Recent research has focused on the use of ontologies to promote sharing. An ontology is a hierarchically structured set of terms for describing a domain that can be used as a skeletal foundation for a knowledge base. If two knowledge bases are built on a common ontology, knowledge can be more readily shared, since they share a common underlying structure. This paper outlines a set of desiderata for ontologies, and then describes how we have used a large-scale (50,000+ concept) ontology develop a specialized, domain-specific ontology semiautomatically. We then discuss the relation between ontologies and the process of developing a system, arguing that to be useful, an ontology needs to be created as a \"living document\", whose development is tightly integrated with the system’s. We conclude with a discussion of Web-based ontology tools we are developing to support this approach.", "title": "" }, { "docid": "64fb3fdb4f37ee75b1506c2fdb09cf7a", "text": "With the proliferation of mobile devices, cloud-based photo sharing and searching services are becoming common du e to the mobile devices’ resource constrains. Meanwhile, the r is also increasing concern about privacy in photos. In this wor k, we present a framework SouTu, which enables cloud servers to provide privacy-preserving photo sharing and search as a se rvice to mobile device users. Privacy-seeking users can share the ir photos via our framework to allow only their authorized frie nds to browse and search their photos using resource-bounded mo bile devices. This is achieved by our carefully designed archite cture and novel outsourced privacy-preserving computation prot ocols, through which no information about the outsourced photos or even the search contents (including the results) would be revealed to the cloud servers. Our framework is compatible with most of the existing image search technologies, and it requi res few changes to the existing cloud systems. The evaluation of our prototype system with 31,772 real-life images shows the communication and computation efficiency of our system.", "title": "" }, { "docid": "1450854a32ea6c18f4cc817f686aaf15", "text": "This article reports on the development of two measures relating to historical trauma among American Indian people: The Historical Loss Scale and The Historical Loss Associated Symptoms Scale. Measurement characteristics including frequencies, internal reliability, and confirmatory factor analyses were calculated based on 143 American Indian adult parents of children aged 10 through 12 years who are part of an ongoing longitudinal study of American Indian families in the upper Midwest. Results indicate both scales have high internal reliability. Frequencies indicate that the current generation of American Indian adults have frequent thoughts pertaining to historical losses and that they associate these losses with negative feelings. Two factors of the Historical Loss Associated Symptoms Scale indicate one anxiety/depression component and one anger/avoidance component. 
The results are discussed in terms of future research and theory pertaining to historical trauma among American Indian people.", "title": "" }, { "docid": "942da03bcd01ecdcb7e1334940c7c549", "text": "This paper introduces three classic models of statistical topic models: Latent Semantic Indexing (LSI), Probabilistic Latent Semantic Indexing (PLSI) and Latent Dirichlet Allocation (LDA). Then a method of text classification based on LDA model is briefly described, which uses LDA model as a text representation method. Each document means a probability distribution of fixed latent topic sets. Next, Support Vector Machine (SVM) is chose as classification algorithm. Finally, the evaluation parameters in classification system of LDA with SVM are higher than other two methods which are LSI with SVM and VSM with SVM, showing a better classification performance.", "title": "" }, { "docid": "1c3af13e29fc8a1cea5ee821d62b86f0", "text": "Cellular and 802.11 WiFi are compelling options for mobile Internet connectivity. The goal of our work is to understand the performance afforded by each of these technologies in diverse environments and use conditions. In this paper, we compare and contrast cellular and WiFi performance using crowd-sourced data from Speedtest.net. Our study considers spatio-temporal performance (upload/download throughput and latency) using over 3 million user-initiated tests from iOS and Android apps in 15 different metro areas collected over a 15 week period. Our basic performance comparisons show that (i) WiFi provides better absolute download/upload throughput, and a higher degree of consistency in performance; (ii) WiFi networks generally deliver lower absolute latency, but the consistency in latency is often better with cellular access; (iii) throughput and latency vary widely depending on the particular access type e.g., HSPA, EVDO, LTE, WiFi, etc.) and service provider. More broadly, our results show that performance consistency for cellular and WiFi is much lower than has been reported for wired broadband. Temporal analysis shows that average performance for cell and WiFi varies with time of day, with the best performance for large metro areas coming at non-peak hours. Spatial analysis shows that performance is highly variable across metro areas, but that there are subregions that offer consistently better performance for cell or WiFi. Comparisons between metro areas show that larger areas provide higher throughput and lower latency than smaller metro areas, suggesting where ISPs have focused their deployment efforts. Finally, our analysis reveals diverse performance characteristics resulting from the rollout of new cell access technologies and service differences among local providers.", "title": "" }, { "docid": "6081bf3a4f6e742ffc834a384223d66d", "text": "According to the vision of the society to brand trust and brand loyalty, this study is conducted to \"investigate the effective factors on the loyalty to the brand in social media with a case study of Samsung brand\". This research is important and necessary because we can obtain predictions and plans by the achieved results for the society and the influence of virtual social media as a non-native technology. Therefore, the focus of this research is on the relationships of customers whom use social media and the effects of these relationships on brand trust and brand loyalty in brand society. In this research, descriptive, correlational and causal –comparative methods are used. 
And users of social media in Tehran city were considered as statistical population, because this statistical population is infinite, 384 samples were selected by simple random method and were studied by standard questionnaire of Michele Laruch et al (2012). Expert’s approval and Cronbach's alpha coefficient test with value of 0.922 and Splithalf method with the value of 0.920 were used to measure the validity and reliability of the questionnaire. Then, SPSS software, and uni-variate and multivariate linear regression analysis were used to calculate the effect of each independent variable on the dependent variable and the relationship between them. The obtained results show that social media has positive effects on customer-product, customer-brand, customer-company, customer-other customer’s relationships which in turn has a positive effect on brand trust, and brand trust has positive effects brand loyalty. We have found that brand trust has a quite intermediate role in changing the effects of improved relationships in brand society to brand loyalty. © 2015 Bull. Georg. Natl.Acad. Sci.", "title": "" }, { "docid": "0f2caa9b91c2c180cbfbfcc25941f78e", "text": "BACKGROUND\nSevere mitral annular calcification causing degenerative mitral stenosis (DMS) is increasingly encountered in patients undergoing mitral and aortic valve interventions. However, its clinical profile and natural history and the factors affecting survival remain poorly characterized. The goal of this study was to characterize the factors affecting survival in patients with DMS.\n\n\nMETHODS\nAn institutional echocardiographic database was searched for patients with DMS, defined as severe mitral annular calcification without commissural fusion and a mean transmitral diastolic gradient of ≥2 mm Hg. This resulted in a cohort of 1,004 patients. Survival was analyzed as a function of clinical, pharmacologic, and echocardiographic variables.\n\n\nRESULTS\nThe patient characteristics were as follows: mean age, 73 ± 14 years; 73% women; coronary artery disease in 49%; and diabetes mellitus in 50%. The 1- and 5-year survival rates were 78% and 47%, respectively, and were slightly worse with higher DMS grades (P = .02). Risk factors for higher mortality included greater age (P < .0001), atrial fibrillation (P = .0009), renal insufficiency (P = .004), mitral regurgitation (P < .0001), tricuspid regurgitation (P < .0001), elevated right atrial pressure (P < .0001), concomitant aortic stenosis (P = .02), and low serum albumin level (P < .0001). Adjusted for propensity scores, use of renin-angiotensin system blockers (P = .02) or statins (P = .04) was associated with better survival, and use of digoxin was associated with higher mortality (P = .007).\n\n\nCONCLUSIONS\nPrognosis in patients with DMS is poor, being worse in the aged and those with renal insufficiency, atrial fibrillation, and other concomitant valvular lesions. Renin-angiotensin system blockers and statins may confer a survival benefit, and digoxin use may be associated with higher mortality in these patients.", "title": "" }, { "docid": "4cef84bb3a1ff5f5ed64a4149d501f57", "text": "In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is the intelligence exhibited by machines or software. It is the subfield of computer science. Artificial Intelligence is becoming a popular field in computer science as it has enhanced the human life in many areas. 
Artificial intelligence in the last two decades has greatly improved performance of the manufacturing and service systems. Study in the area of artificial intelligence has given rise to the rapidly growing technology known as expert system. Application areas of Artificial Intelligence is having a huge impact on various fields of life as expert system is widely used these days to solve the complex problems in various areas as science, engineering, business, medicine, weather forecasting. The areas employing the technology of Artificial Intelligence have seen an increase in the quality and efficiency. This paper gives an overview of this technology and the application areas of this technology. This paper will also explore the current use of Artificial Intelligence technologies in the PSS design to damp the power system oscillations caused by interruptions, in Network Intrusion for protecting computer and communication networks from intruders, in the medical areamedicine, to improve hospital inpatient care, for medical image classification, in the accounting databases to mitigate the problems of it and in the computer games.", "title": "" }, { "docid": "d6d275b719451982fa67d442c55c186c", "text": "Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.", "title": "" }, { "docid": "4003b1a03be323c78e98650895967a07", "text": "In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.", "title": "" }, { "docid": "6dbabfe7370b19c55a52671c82c3e3c8", "text": "The development of a compact circular polarization Orthomode Trasducer (OMT) working in two frequency bands with dual circular polarization (RHCP & LHCP) is presented. The device covers the complete communication spectrum allocated at C-band. At the same time, the device presents high power handling capability and very low mass and envelope size. The OMT plus a feed horn are used to illuminate a Reflector antenna, the surface of which is shaped to provide domestic or regional coverage from geostationary orbit. The full band operation increases the earth-satellite communication capability. The paper will show the OMT selected architecture, the RF performances at unit level and at component level. 
RF power aspects like multipaction and PIM are addressed. This development was performed under European Space Agency ESA ARTES-4 program.", "title": "" }, { "docid": "2876086e4431e8607d5146f14f0c29dc", "text": "Vascular ultrasonography has an important role in the diagnosis and management of venous disease. The venous system, however, is more complex and variable compared to the arterial system due to its frequent anatomical variations. This often becomes quite challenging for sonographers. This paper discusses the anatomy of the long saphenous vein and its anatomical variations accompanied by sonograms and illustrations.", "title": "" }, { "docid": "17663e43a26892d78f52abe4bceb8a28", "text": "This paper presents a project named PBMaster, which provides an open implementation of the Profibus DP (Process Field Bus Decentralized Peripherals). The project implements a software implementation of this very popular fieldbus used in factory automation. Most Profibus solutions, especially those implementing the master station, are based on ASICs, which require bespoke hardware to be built solely for the purpose of Profibus from the outset. Conversely, this software implementation can run on a wide range of hardware, where the UART and RS-485 standards are present.", "title": "" }, { "docid": "3460dbea27f1de0f13636c04bbfb2569", "text": "The secret keys of critical network authorities -- such as time, name, certificate, and software update services -- represent high-value targets for hackers, criminals, and spy agencies wishing to use these keys secretly to compromise other hosts. To protect authorities and their clients proactively from undetected exploits and misuse, we introduce CoSi, a scalable witness cosigning protocol ensuring that every authoritative statement is validated and publicly logged by a diverse group of witnesses before any client will accept it. A statement S collectively signed by W witnesses assures clients that S has been seen, and not immediately found erroneous, by those W observers. Even if S is compromised in a fashion not readily detectable by the witnesses, CoSi still guarantees S's exposure to public scrutiny, forcing secrecy-minded attackers to risk that the compromise will soon be detected by one of the W witnesses. Because clients can verify collective signatures efficiently without communication, CoSi protects clients' privacy, and offers the first transparency mechanism effective against persistent man-in-the-middle attackers who control a victim's Internet access, the authority's secret key, and several witnesses' secret keys. CoSi builds on existing cryptographic multisignature methods, scaling them to support thousands of witnesses via signature aggregation over efficient communication trees. A working prototype demonstrates CoSi in the context of timestamping and logging authorities, enabling groups of over 8,000 distributed witnesses to cosign authoritative statements in under two seconds.", "title": "" }, { "docid": "37d36c930f6cf75d469aa27a8cd7f48f", "text": "Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. 
We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.", "title": "" }, { "docid": "a6acba54f34d1d101f4abb00f4fe4675", "text": "We study the potential flow of information in interaction networks, that is, networks in which the interactions between the nodes are being recorded. The central notion in our study is that of an information channel. An information channel is a sequence of interactions between nodes forming a path in the network which respects the time order. As such, an information channel represents a potential way information could have flown in the interaction network. We propose algorithms to estimate information channels of limited time span from every node to other nodes in the network. We present one exact and one more efficient approximate algorithm. Both algorithms are onepass algorithms. The approximation algorithm is based on an adaptation of the HyperLogLog sketch, which allows easily combining the sketches of individual nodes in order to get estimates of how many unique nodes can be reached from groups of nodes as well. We show how the results of our algorithm can be used to build efficient influence oracles for solving the Influence maximization problem which deals with finding top k seed nodes such that the information spread from these nodes is maximized. Experiments show that the use of information channels is an interesting data-driven and model-independent way to find top k influential nodes in interaction networks.", "title": "" }, { "docid": "2802e8fd4d8df23d55dee9afac0f4177", "text": "Brain plasticity refers to the brain's ability to change structure and function. Experience is a major stimulant of brain plasticity in animal species as diverse as insects and humans. It is now clear that experience produces multiple, dissociable changes in the brain including increases in dendritic length, increases (or decreases) in spine density, synapse formation, increased glial activity, and altered metabolic activity. These anatomical changes are correlated with behavioral differences between subjects with and without the changes. Experience-dependent changes in neurons are affected by various factors including aging, gonadal hormones, trophic factors, stress, and brain pathology. We discuss the important role that changes in dendritic arborization play in brain plasticity and behavior, and we consider these changes in the context of changing intrinsic circuitry of the cortex in processes such as learning.", "title": "" }, { "docid": "9478efffef9b34aa43a3e69765a48507", "text": "Digital chaotic ciphers have been investigated for more than a decade. 
However, their overall performance in terms of the tradeoff between security and speed, as well as the connection between chaos and cryptography, has not been sufficiently addressed. We propose a chaotic Feistel cipher and a chaotic uniform cipher. Our plan is to examine crypto components from both dynamical-system and cryptographical points of view, thus to explore connection between these two fields. In the due course, we also apply dynamical system theory to create cryptographically secure transformations and evaluate cryptographical security measures", "title": "" }, { "docid": "0afbce731c55b9a3d3ced22ad59aa0ef", "text": "In this paper, we introduce a method that automatically builds text classifiers in a new language by training on already labeled data in another language. Our method transfers the classification knowledge across languages by translating the model features and by using an Expectation Maximization (EM) algorithm that naturally takes into account the ambiguity associated with the translation of a word. We further exploit the readily available unlabeled data in the target language via semisupervised learning, and adapt the translated model to better fit the data distribution of the target language.", "title": "" }, { "docid": "efb124a26b0cdc9b022975dd83ec76c8", "text": "Apache Spark is an open-source cluster computing framework for big data processing. It has emerged as the next generation big data processing engine, overtaking Hadoop MapReduce which helped ignite the big data revolution. Spark maintains MapReduce's linear scalability and fault tolerance, but extends it in a few important ways: it is much faster (100 times faster for certain applications), much easier to program in due to its rich APIs in Python, Java, Scala (and shortly R), and its core data abstraction, the distributed data frame, and it goes far beyond batch applications to support a variety of compute-intensive tasks, including interactive queries, streaming, machine learning, and graph processing. This tutorial will provide an accessible introduction to Spark and its potential to revolutionize academic and commercial data science practices.", "title": "" } ]
scidocsrr
0f9308f3886928237fa9837f5f1e2293
Scenario-Based Analysis of Software Architecture
[ { "docid": "85180ac475de8437bde80a7dbbfc9759", "text": "Excellent book is always being the best friend for spending little time in your office, night time, bus, and everywhere. It will be a good way to just look, open, and read the book while in that time. As known, experience and skill don't always come with the much money to acquire them. Reading this book with the PDF object oriented software engineering a use case driven approach will let you know more things.", "title": "" } ]
[ { "docid": "4fe25c65a4fd1886018482aceb82ad6f", "text": "Article history: Received 21 March 2011 Revised 28 February 2012 Accepted 5 March 2012 Available online 26 March 2012 The purpose of this paper is (1) to identify critical issues in the current literature on ethical leadership — i.e., the conceptual vagueness of the construct itself and the focus on a Western-based perspective; and (2) to address these issues and recent calls for more collaboration between normative and empirical-descriptive inquiry of ethical phenomena by developing an interdisciplinary integrative approach to ethical leadership. Based on the analysis of similarities between Western and Eastern moral philosophy and ethics principles of the world religions, the present approach identifies four essential normative reference points of ethical leadership— the four central ethical orientations: (1) humane orientation, (2) justice orientation, (3) responsibility and sustainability orientation, and (4) moderation orientation. Research propositions on predictors and consequences of leader expressions of the four central orientations are offered. Real cases of ethical leadership choices, derived from in-depth interviews with international leaders, illustrate how the central orientations play out in managerial practice. © 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "d8a194a88ccf20b8160b75d930969c85", "text": "We describe the design and hardware implementation of our walking and manipulation controllers that are based on a cascade of online optimizations. A virtual force acting at the robot's center of mass (CoM) is estimated and used to compensated for modeling errors of the CoM and unplanned external forces. The proposed controllers have been implemented on the Atlas robot, a full size humanoid robot built by Boston Dynamics, and used in the DARPA Robotics Challenge Finals, which consisted of a wide variety of locomotion and manipulation tasks.", "title": "" }, { "docid": "1a0d0b0b38e6d6434448cee8959c58a8", "text": "This paper reports the first results of an investigation into solutions to problems of security in computer systems; it establishes the basis for rigorous investigation by providing a general descriptive model of a computer system. Borrowing basic concepts and constructs from general systems theory, we present a basic result concerning security in computer systems, using precise notions of \"security\" and \"compromise\". We also demonstrate how a change in requirements can be reflected in the resulting mathematical model. A lengthy introductory section is included in order to bridge the gap between general systems theory and practical problem solving. ii PREFACE General systems theory is a relatively new and rapidly growing mathematical discipline which shows great promise for application in the computer sciences. The discipline includes both \"general systems-theory\" and \"general-systems-theory\": that is, one may properly read the phrase \"general systems theory\" in both ways. In this paper, we have borrowed from the works of general systems theorists, principally from the basic work of Mesarovic´, to formulate a mathematical framework within which to deal with the problems of secure computer systems. At the present time we feel that the mathematical representation developed herein is adequate to deal with most if not all of the security problems one may wish to pose. 
In Section III we have given a result which deals with the most trivial of the secure computer systems one might find viable in actual use. In the concluding section we review the application of our mathematical methodology and suggest major areas of concern in the design of a secure system. The results reported in this paper lay the groundwork for further, more specific investigation into secure computer systems. The investigation will proceed by specializing the elements of the model to represent particular aspects of system design and operation. Such an investigation will be reported in the second volume of this series where we assume a system with centralized access control. A preliminary investigation of distributed access is just beginning; the results of that investigation would be reported in a third volume of the series.", "title": "" }, { "docid": "ad61c6474832ecbe671040dfcb64e6aa", "text": "This paper provides a brief overview on the recent advances of small-scale unmanned aerial vehicles (UAVs) from the perspective of platforms, key elements, and scientific research. The survey starts with an introduction of the recent advances of small-scale UAV platforms, based on the information summarized from 132 models available worldwide. Next, the evolvement of the key elements, including onboard processing units, navigation sensors, mission-oriented sensors, communication modules, and ground control station, is presented and analyzed. Third, achievements of small-scale UAV research, particularly on platform design and construction, dynamics modeling, and flight control, are introduced. Finally, the future of small-scale UAVs' research, civil applications, and military applications are forecasted.", "title": "" }, { "docid": "8f13fbf6de0fb0685b4a39ee5f3bb415", "text": "This review presents one of the eight theories of the quality of life (QOL) used for making the SEQOL (self-evaluation of quality of life) questionnaire or the quality of life as realizing life potential. This theory is strongly inspired by Maslow and the review furthermore serves as an example on how to fulfill the demand for an overall theory of life (or philosophy of life), which we believe is necessary for global and generic quality-of-life research. Whereas traditional medical science has often been inspired by mechanical models in its attempts to understand human beings, this theory takes an explicitly biological starting point. The purpose is to take a close view of life as a unique entity, which mechanical models are unable to do. This means that things considered to be beyond the individual's purely biological nature, notably the quality of life, meaning in life, and aspirations in life, are included under this wider, biological treatise. Our interpretation of the nature of all living matter is intended as an alternative to medical mechanism, which dates back to the beginning of the 20th century. New ideas such as the notions of the human being as nestled in an evolutionary and ecological context, the spontaneous tendency of self-organizing systems for realization and concord, and the central role of consciousness in interpreting, planning, and expressing human reality are unavoidable today in attempts to scientifically understand all living matter, including human life.", "title": "" }, { "docid": "cf51f466c72108d5933d070b307e5d6d", "text": "The study reported here follows the suggestion by Caplan et al. 
(Justice Q, 2010) that risk terrain modeling (RTM) be developed by doing more work to elaborate, operationalize, and test variables that would provide added value to its application in police operations. Building on the ideas presented by Caplan et al., we address three important issues related to RTM that sets it apart from current approaches to spatial crime analysis. First, we address the selection criteria used in determining which risk layers to include in risk terrain models. Second, we compare the ‘‘best model’’ risk terrain derived from our analysis to the traditional hotspot density mapping technique by considering both the statistical power and overall usefulness of each approach. Third, we test for ‘‘risk clusters’’ in risk terrain maps to determine how they can be used to target police resources in a way that improves upon the current practice of using density maps of past crime in determining future locations of crime occurrence. This paper concludes with an in depth exploration of how one might develop strategies for incorporating risk terrains into police decisionmaking. RTM can be developed to the point where it may be more readily adopted by police crime analysts and enable police to be more effectively proactive and identify areas with the greatest probability of becoming locations for crime in the future. The targeting of police interventions that emerges would be based on a sound understanding of geographic attributes and qualities of space that connect to crime outcomes and would not be the result of identifying individuals from specific groups or characteristics of people as likely candidates for crime, a tactic that has led police agencies to be accused of profiling. In addition, place-based interventions may offer a more efficient method of impacting crime than efforts focused on individuals.", "title": "" }, { "docid": "91f5c7b130a7eadef8df1b596cda1eaf", "text": "It is well-established that within crisis-related communications, rumors are likely to emerge. False rumors, i.e. misinformation, can be detrimental to crisis communication and response; it is therefore important not only to be able to identify messages that propagate rumors, but also corrections or denials of rumor content. In this work, we explore the task of automatically classifying rumor stances expressed in crisisrelated content posted on social media. Utilizing a dataset of over 4,300 manually coded tweets, we build a supervised machine learning model for this task, achieving an accuracy over 88% across a diverse set of rumors of different types.", "title": "" }, { "docid": "a25e2540e97918b954acbb6fdee57eb7", "text": "Tweet streams provide a variety of real-life and real-time information on social events that dynamically change over time. Although social event detection has been actively studied, how to efficiently monitor evolving events from continuous tweet streams remains open and challenging. One common approach for event detection from text streams is to use single-pass incremental clustering. However, this approach does not track the evolution of events, nor does it address the issue of efficient monitoring in the presence of a large number of events. In this paper, we capture the dynamics of events using four event operations (create, absorb, split, and merge), which can be effectively used to monitor evolving events. 
Moreover, we propose a novel event indexing structure, called Multi-layer Inverted List (MIL), to manage dynamic event databases for the acceleration of large-scale event search and update. We thoroughly study the problem of nearest neighbour search using MIL based on upper bound pruning, along with incremental index maintenance. Extensive experiments have been conducted on a large-scale real-life tweet dataset. The results demonstrate the promising performance of our event indexing and monitoring methods on both efficiency and effectiveness.", "title": "" }, { "docid": "5b7ff78bc563c351642e5f316a6d895b", "text": "OBJECTIVE\nTo determine an albino population's expectations from an outreach albino clinic, understanding of skin cancer risk, and attitudes toward sun protection behavior.\n\n\nDESIGN\nSurvey, June 1, 1997, to September 30, 1997.\n\n\nSETTING\nOutreach albino clinics in Tanzania.\n\n\nPARTICIPANTS\nAll albinos 13 years and older and accompanying adults of younger children attending clinics. Unaccompanied children younger than 13 years and those too sick to answer questions were excluded. Ninety-four questionnaires were completed in 5 villages, with a 100% response rate.\n\n\nINTERVENTIONS\nInterview-based questionnaire with scoring system for pictures depicting poorly sun-protected albinos.\n\n\nRESULTS\nThe most common reasons for attending the clinic were health education and skin examination. Thirteen respondents (14%) believed albinism was inherited; it was more common to believe in superstitious causes of albinism than inheritance. Seventy-three respondents (78%) believed skin cancer was preventable, and 60 (63%) believed skin cancer was related to the sun. Seventy-two subjects (77%) thought sunscreen provided protection from the sun; 9 (10%) also applied it at night. Reasons for not wearing sun-protective clothing included fashion, culture, and heat. The hats provided were thought to have too soft a brim, to shrink, and to be ridiculed. Suggestions for additional clinic services centered on education and employment. Albinos who had read the educational booklet had no better understanding of sun avoidance than those who had not (P =.49).\n\n\nCONCLUSIONS\nThere was a reasonable understanding of risks of skin cancer and sun-avoidance methods. Clinical advice was often not followed for cultural reasons. The hats provided were unsuitable, and there was some confusion about the use of sunscreen. A lack of understanding of the cause of albinism led to many superstitions.", "title": "" }, { "docid": "e2af17b368fef36187c895ad5fd20a58", "text": "We study in this paper the problem of jointly clustering and learning representations. As several previous studies have shown, learning representations that are both faithful to the data to be clustered and adapted to the clustering algorithm can lead to better clustering performance, all the more so that the two tasks are performed jointly. We propose here such an approach for k-Means clustering based on a continuous reparametrization of the objective function that leads to a truly joint solution. The behavior of our approach is illustrated on various datasets showing its efficacy in learning representations for objects while clustering them.", "title": "" }, { "docid": "a9fae3b86b21e40e71b99e5374cd3d4d", "text": "Motor vehicle collisions are an important cause of blunt abdominal trauma in pregnant woman. 
Among the possible outcomes of blunt abdominal trauma, placental abruption, direct fetal trauma, and rupture of the gravid uterus are described. An interesting case of complete fetal decapitation with uterine rupture due to a high-velocity motor vehicle collision is described. The external examination of the fetus showed a disconnection between the cervical vertebrae C3 and C4. The autopsy examination showed hematic infiltration of the epicranic soft tissues, an overlap of the parietal bones, and a subarachnoid hemorrhage in the posterior part of interparietal area. Histological analysis was carried out showing a lack of epithelium and hemorrhages in the subcutaneous tissue, a hematic infiltration between the muscular fibers of the neck and between the collagen and deep muscular fibers of the tracheal wall. Specimens collected from the placenta and from the uterus showed a hematic infiltration with hypotrophy of the placental villi, fibrosis of the mesenchymal villi with ischemic phenomena of the membrane. The convergence of circumstantial data, autopsy results, and histological data led us to conclude that the neck lesion was vital and the cause of death was attributed to the motor vehicle collision.", "title": "" }, { "docid": "7e61b5f63d325505209c3284c8a444a1", "text": "A method to design low-pass filters (LPF) having a defected ground structure (DGS) and broadened transmission-line elements is proposed. The previously presented technique for obtaining a three-stage LPF using DGS by Lim et al. is generalized to propose a method that can be applied in design N-pole LPFs for N/spl les/5. As an example, a five-pole LPF having a DGS is designed and measured. Accurate curve-fitting results and the successive design process to determine the required size of the DGS corresponding to the LPF prototype elements are described. The proposed LPF having a DGS, called a DGS-LPF, includes transmission-line elements with very low impedance instead of open stubs in realizing the required shunt capacitance. Therefore, open stubs, teeor cross-junction elements, and high-impedance line sections are not required for the proposed LPF, while they all have been essential in conventional LPFs. Due to the widely broadened transmission-line elements, the size of the DGS-LPF is compact.", "title": "" }, { "docid": "001d2da1fbdaf2c49311f6e68b245076", "text": "Lack of physical activity is a serious health concern for individuals who are visually impaired as they have fewer opportunities and incentives to engage in physical activities that provide the amounts and kinds of stimulation sufficient to maintain adequate fitness and to support a healthy standard of living. Exergames are video games that use physical activity as input and which have the potential to change sedentary lifestyles and associated health problems such as obesity. We identify that exergames have a number properties that could overcome the barriers to physical activity that individuals with visual impairments face. However, exergames rely upon being able to perceive visual cues that indicate to the player what input to provide. This paper presents VI Tennis, a modified version of a popular motion sensing exergame that explores the use of vibrotactile and audio cues. The effectiveness of providing multimodal (tactile/audio) versus unimodal (audio) cues was evaluated with a user study with 13 children who are blind. Children achieved moderate to vigorous levels of physical activity- the amount required to yield health benefits. 
No significant difference in active energy expenditure was found between both versions, though children scored significantly better with the tactile/audio version and also enjoyed playing this version more, which emphasizes the potential of tactile/audio feedback for engaging players for longer periods of time.", "title": "" }, { "docid": "32e92e1be00613e06a7bc03d457704ac", "text": "Computer systems often fail due to many factors such as software bugs or administrator errors. Diagnosing such production run failures is an important but challenging task since it is difficult to reproduce them in house due to various reasons: (1) unavailability of users' inputs and file content due to privacy concerns; (2) difficulty in building the exact same execution environment; and (3) non-determinism of concurrent executions on multi-processors.\n Therefore, programmers often have to diagnose a production run failure based on logs collected back from customers and the corresponding source code. Such diagnosis requires expert knowledge and is also too time-consuming, tedious to narrow down root causes. To address this problem, we propose a tool, called SherLog, that analyzes source code by leveraging information provided by run-time logs to infer what must or may have happened during the failed production run. It requires neither re-execution of the program nor knowledge on the log's semantics. It infers both control and data value information regarding to the failed execution.\n We evaluate SherLog with 8 representative real world software failures (6 software bugs and 2 configuration errors) from 7 applications including 3 servers. Information inferred by SherLog are very useful for programmers to diagnose these evaluated failures. Our results also show that SherLog can analyze large server applications such as Apache with thousands of logging messages within only 40 minutes.", "title": "" }, { "docid": "f794d4a807a4d69727989254c557d2d1", "text": "The purpose of this study was to describe the operative procedures and clinical outcomes of a new three-column internal fixation system with anatomical locking plates on the tibial plateau to treat complex three-column fractures of the tibial plateau. From June 2011 to May 2015, 14 patients with complex three-column fractures of the tibial plateau were treated with reduction and internal fixation through an anterolateral approach combined with a posteromedial approach. The patients were randomly divided into two groups: a control group which included seven cases using common locking plates, and an experimental group which included seven cases with a new three-column internal fixation system with anatomical locking plates. The mean operation time of the control group was 280.7 ± 53.7 minutes, which was 215.0 ± 49.1 minutes in the experimental group. The mean intra-operative blood loss of the control group was 692.8 ± 183.5 ml, which was 471.4 ± 138.0 ml in the experimental group. The difference was statistically significant between the two groups above. The differences were not statistically significant between the following mean numbers of the two groups: Rasmussen score immediately after operation; active extension–flexion degrees of knee joint at three and 12 months post-operatively; tibial plateau varus angle (TPA) and posterior slope angle (PA) immediately after operation, at three and at 12 months post-operatively; HSS (The Hospital for Special Surgery) knee-rating score at 12 months post-operatively. All fractures healed. 
A three-column internal fixation system with anatomical locking plates on tibial plateau is an effective and safe tool to treat complex three-column fractures of the tibial plateau and it is more convenient than the common plate.", "title": "" }, { "docid": "1b581e17dad529b3452d3fbdcb1b3dd1", "text": "Authorship attribution is the task of identifying the author of a given text. The main concern of this task is to define an appropriate characterization of documents that captures the writing style of authors. This paper proposes a new method for authorship attribution supported on the idea that a proper identification of authors must consider both stylistic and topic features of texts. This method characterizes documents by a set of word sequences that combine functional and content words. The experimental results on poem classification demonstrated that this method outperforms most current state-of-the-art approaches, and that it is appropriate to handle the attribution of short documents.", "title": "" }, { "docid": "063389c654f44f34418292818fc781e7", "text": "In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of vulnerability indicators used in the disciplines of earthquakeand flood vulnerability assessments. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of indexand curve-based vulnerability models that use these indicators are described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-the-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend future studies for exploring risk assessment methodologies across different hazard types.", "title": "" }, { "docid": "08b2de5f1c6356c988ac9d6f09ca9a31", "text": "Novel conditions are derived that guarantee convergence of the sum-product algorithm (also known as loopy belief propagation or simply belief propagation (BP)) to a unique fixed point, irrespective of the initial messages, for parallel (synchronous) updates. The computational complexity of the conditions is polynomial in the number of variables. In contrast with previously existing conditions, our results are directly applicable to arbitrary factor graphs (with discrete variables) and are shown to be valid also in the case of factors containing zeros, under some additional conditions. The conditions are compared with existing ones, numerically and, if possible, analytically. 
For binary variables with pairwise interactions, sufficient conditions are derived that take into account local evidence (i.e., single-variable factors) and the type of pair interactions (attractive or repulsive). It is shown empirically that this bound outperforms existing bounds.", "title": "" },
 { "docid": "cbe37cbe2234797a0e3625dbc5c98b68", "text": "This paper investigates a visual interaction system for a vehicle-to-vehicle (V2V) platform, called V3I. Our system employs common visual cameras that are mounted on connected vehicles to perceive the existence of isolated vehicles in the same roadway, and provides human drivers with imagery situational awareness. This allows effective interactions between vehicles even with a low permeation rate of V2V devices. The underlying research problem for V3I includes two aspects: i) tracking isolated vehicles of interest over time through local cameras; ii) at each time-step fusing the results of local visual perceptions to obtain a global location map that involves both isolated and connected vehicles. In this paper, we introduce a unified probabilistic approach to solve the above two problems, i.e., tracking and localization, in a joint fashion. Our approach will explore both the visual features of individual vehicles in images and the pair-wise spatial relationships between vehicles. We develop a fast Markov Chain Monte Carlo (MCMC) algorithm to search the joint solution space efficiently, which enables real-time application. To evaluate the performance of the proposed approach, we collect and annotate a set of video sequences captured with a group of vehicle-resident cameras. Extensive experiments with comparisons clearly demonstrate that the proposed V3I approach can precisely recover the dynamic location map of the surroundings and thus enable direct visual interactions between vehicles.", "title": "" },
 { "docid": "eb59f239621dde59a13854c5e6fa9f54", "text": "This paper presents a novel application of grammatical inference techniques to the synthesis of behavior models of software systems. This synthesis is used for the elicitation of software requirements. This problem is formulated as a deterministic finite-state automaton induction problem from positive and negative scenarios provided by an end-user of the software-to-be. A query-driven state merging algorithm (QSM) is proposed. It extends the RPNI and Blue-Fringe algorithms by allowing membership queries to be submitted to the end-user. State merging operations can be further constrained by some prior domain knowledge formulated as fluents, goals, domain properties, and models of external software components. The incorporation of domain knowledge both reduces the number of queries and guarantees that the induced model is consistent with such knowledge. The proposed techniques are implemented in the ISIS tool and practical evaluations on standard requirements engineering test cases and synthetic data illustrate the interest of this approach.", "title": "" } ]
scidocsrr
c5866cd38e9fb246e011b3ca468f5fc4
After Sandy Hook Elementary: A Year in the Gun Control Debate on Twitter
[ { "docid": "b5004502c5ce55f2327e52639e65d0b6", "text": "Public health applications using social media often require accurate, broad-coverage location information. However, the standard information provided by social media APIs, such as Twitter, cover a limited number of messages. This paper presents Carmen, a geolocation system that can determine structured location information for messages provided by the Twitter API. Our system utilizes geocoding tools and a combination of automatic and manual alias resolution methods to infer location structures from GPS positions and user-provided profile data. We show that our system is accurate and covers many locations, and we demonstrate its utility for improving influenza surveillance.", "title": "" } ]
[ { "docid": "210ec3c86105f496087c7b012619e1d3", "text": "An ultra compact projection system based on a high brightness OLEd micro display is developed. System design and realization of a prototype are presented. This OLEd pico projector with a volume of about 10 cm3 can be integrated into portable systems like mobile phones or PdAs. The Fraunhofer IPMS developed the high brightness monochrome OLEd micro display. The Fraunhofer IOF desig­ ned the specific projection lens [1] and in tegrated the OLEd and the projection optic to a full functional pico projection system. This article provides a closer look on the technology and its possibilities.", "title": "" }, { "docid": "f7d56588da8f5c5ac0f1481e5f2286b4", "text": "Machine learning is an established method of selecting algorithms to solve hard search problems. Despite this, to date no systematic comparison and evaluation of the different techniques has been performed and the performance of existing systems has not been critically compared to other approaches. We compare machine learning techniques for algorithm selection on real-world data sets of hard search problems. In addition to well-established approaches, for the first time we also apply statistical relational learning to this problem. We demonstrate that most machine learning techniques and existing systems perform less well than one might expect. To guide practitioners, we close by giving clear recommendations as to which machine learning techniques are likely to perform well based on our experiments.", "title": "" }, { "docid": "86dd65bddeb01d4395b81cef0bc4f00e", "text": "Many people may see the development of software and hardware like different disciplines. However, there are great similarities between them that have been shown due to the appearance of extensions for general purpose programming languages for its use as hardware description languages. In this contribution, the approach proposed by the MyHDL package to use Python as an HDL is analyzed by making a comparative study. This study is based on the independent application of Verilog and Python based flows to the development of a real peripheral. The use of MyHDL has revealed to be a powerful and promising tool, not only because of the surprising results, but also because it opens new horizons towards the development of new techniques for modeling and verification, using the full power of one of the most versatile programming languages nowadays.", "title": "" }, { "docid": "d2e0c8db8724b25a646e2c1f24f395bc", "text": "US Presidential election is an event anticipated by US citizens and people around the world. By utilizing the big data provided by social media, this research aims to make a prediction of the party or candidate that will win the US presidential election 2016. This paper proposes two stages in research methodology which is data collection and implementation. Data used in this research are collected from Twitter. The implementation stage consists of preprocessing, sentiment analysis, aggregation, and implementation of Electoral College system to predict the winning party or candidate. The implementation of Electoral College will be limited only by using winner take all basis for all states. The implementations are referring from previous works with some addition of methods. 
The proposed method still unable to use real time data due to random user location value gathered from Twitter REST API, and researchers will be working on it for future works.", "title": "" }, { "docid": "fe697283a3e08f04d439ffaeb11746e9", "text": "Visual Question Answering (VQA) has attracted attention from both computer vision and natural language processing communities. Most existing approaches adopt the pipeline of representing an image via pre-trained CNNs, and then using the uninterpretable CNN features in conjunction with the question to predict the answer. Although such end-to-end models might report promising performance, they rarely provide any insight, apart from the answer, into the VQA process. In this work, we propose to break up the end-to-end VQA into two steps: explaining and reasoning, in an attempt towards a more explainable VQA by shedding light on the intermediate results between these two steps. To that end, we first extract attributes and generate descriptions as explanations for an image using pre-trained attribute detectors and image captioning models, respectively. Next, a reasoning module utilizes these explanations in place of the image to infer an answer to the question. The advantages of such a breakdown include: (1) the attributes and captions can reflect what the system extracts from the image, thus can provide some explanations for the predicted answer; (2) these intermediate results can help us identify the inabilities of both the image understanding part and the answer inference part when the predicted answer is wrong. We conduct extensive experiments on a popular VQA dataset and dissect all results according to several measurements of the explanation quality. Our system achieves comparable performance with the state-of-theart, yet with added benefits of explanability and the inherent ability to further improve with higher quality explanations.", "title": "" }, { "docid": "6469b318a84d5865e304a8afd4408cfa", "text": "5-hydroxytryptamine (5-HT, serotonin) is an ancient biochemical manipulated through evolution to be utilized extensively throughout the animal and plant kingdoms. Mammals employ 5-HT as a neurotransmitter within the central and peripheral nervous systems, and also as a local hormone in numerous other tissues, including the gastrointestinal tract, the cardiovascular system and immune cells. This multiplicity of function implicates 5-HT in a vast array of physiological and pathological processes. This plethora of roles has consequently encouraged the development of many compounds of therapeutic value, including various antidepressant, antipsychotic and antiemetic drugs.", "title": "" }, { "docid": "148f27fdea734cf4ae50d38caca94827", "text": "This paper discusses a personalized heart monitoring system using smart phones and wireless (bio) sensors. We combine ubiquitous computing with mobile health technology to monitor the wellbeing of high risk cardiac patients. The smart phone analyses in real-time the ECG data and determines whether the person needs external help. We focus on two life threatening arrhythmias: ventricular fibrillation (VF) and ventricular tachycardia (VT). The smart phone can automatically alert the ambulance and pre assigned caregivers when a VF/VT arrhythmia is detected. The system can be personalized to the needs and requirements of the patient. It can be used to give advice (e.g. 
exercise more) or to reassure the patient when the bio-sensors and environmental data are within predefined ranges.", "title": "" },
 { "docid": "a07338beeb3246954815e0389c59ae29", "text": "We have proposed gate-all-around Silicon nanowire MOSFET (SNWFET) on bulk Si as an ultimate transistor. Well controlled processes are used to achieve gate length (LG) of sub-10nm and narrow nanowire widths. Excellent performance with reasonable VTH and short channel immunity are achieved owing to thin nanowire channel, self-aligned gate, and GAA structure. Transistor performance with gate length of 10nm has been demonstrated and nanowire size (DNW) dependency of various electrical characteristics has been investigated. Random telegraph noise (RTN) in SNWFET is studied as well.", "title": "" },
 { "docid": "3013a8b320cbbfc1ac8fed7c06d6996f", "text": "Security and privacy are among the most pressing concerns that have evolved with the Internet. As networks expanded and became more open, security practices shifted to ensure protection of the ever growing Internet, its users, and data. Today, the Internet of Things (IoT) is emerging as a new type of network that connects everything to everyone, everywhere. Consequently, the margin of tolerance for security and privacy becomes narrower because a breach may lead to large-scale irreversible damage. One feature that helps alleviate the security concerns is authentication. While different authentication schemes are used in vertical network silos, a common identity and authentication scheme is needed to address the heterogeneity in IoT and to integrate the different protocols present in IoT. We propose in this paper an identity-based authentication scheme for heterogeneous IoT. The correctness of the proposed scheme is tested with the AVISPA tool and results showed that our scheme is immune to masquerade, man-in-the-middle, and replay attacks.", "title": "" },
 { "docid": "3eebdb20316c225b839cd310dc173499", "text": "This paper proposes a planar embedded structure pick-up coil current sensor for integrated power electronic modules technology. It has compact size, excellent linearity, stability, noise immunity and wide bandwidth without adding significant losses or parasitics. Preliminary test results and discussions are presented in this paper.", "title": "" },
 { "docid": "67a958a34084061e3bcd7964790879c4", "text": "Researchers spend a lot of time searching for published articles relevant to their projects. Despite having similar project interests, researchers perform individual and time-consuming searches, and they are unable to control the results obtained from earlier searches, although they can share those results afterwards. We propose a research paper recommender system that enhances existing search engines with recommendations based on preceding searches performed by other researchers, thereby averting time-consuming searches. A top-k query algorithm retrieves the best answers from a potentially large record set, so that the most accurate records matching the filtering keywords are found. Keywords—Recommendation System, Personalization, Profile, Top-k query, Steiner Tree", "title": "" },
 { "docid": "de7d29c7e11445e836bd04c003443c67", "text": "Logistic regression with ℓ1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale ℓ1-regularized logistic regression problems.
Small problems with up to a thousand or so features and examples can be solved in seconds on a PC; medium sized problems, with tens of thousands of features and examples, can be solved in tens of seconds (assuming some sparsity in the data). A variation on the basic method, that uses a preconditioned conjugate gradient method to compute the search step, can solve very large problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few minutes, on a PC. Using warm-start techniques, a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.", "title": "" }, { "docid": "0b5431e668791d180239849c53faa7f7", "text": "Crowdfunding is quickly emerging as an alternative to traditional methods of funding new products. In a crowdfunding campaign, a seller solicits financial contributions from a crowd, usually in the form of pre-buying an unrealized product, and commits to producing the product if the total amount pledged is above a certain threshold. We provide a model of crowdfunding in which consumers arrive sequentially and make decisions about whether to pledge or not. Pledging is not costless, and hence consumers would prefer not to pledge if they think the campaign will not succeed. This can lead to cascades where a campaign fails to raise the required amount even though there are enough consumers who want the product. The paper introduces a novel stochastic process --- anticipating random walks --- to analyze this problem. The analysis helps explain why some campaigns fail and some do not, and provides guidelines about how sellers should design their campaigns in order to maximize their chances of success. More broadly, Anticipating Random Walks can also find application in settings where agents make decisions sequentially and these decisions are not just affected by past actions of others, but also by how they will impact the decisions of future actors as well.", "title": "" }, { "docid": "2d615aa63ff115a1e9d511456000c226", "text": "The face mask presentation attack introduces a greater threat to the face recognition system. With the evolving technology in generating both 2D and 3D masks in a more sophisticated, realistic and cost effective manner encloses the face recognition system to more challenging vulnerabilities. In this paper, we present a novel Presentation Attack Detection (PAD) scheme that explores both global (i.e. face) and local (i.e. periocular or eye) region to accurately identify the presence of both 2D and 3D face masks. The proposed PAD algorithm is based on both Binarized Statistical Image Features (BSIF) and Local Binary Patterns (LBP) that can capture a prominent micro-texture features. The linear Support Vector Machine (SVM) is then trained independently on these two features that are applied on both local and global region to obtain the comparison scores. We then combine these scores using the weighted sum rule before making the decision about a normal (or real or live) or an artefact (or spoof) face. 
Extensive experiments are carried out on two publicly available databases for 2D and 3D face masks namely: CASIA face spoof database and 3DMAD shows the efficacy of the proposed scheme when compared with well-established state-of-the-art techniques.", "title": "" }, { "docid": "aaba4377acbd22cbc52681d4d15bf9af", "text": "This paper presents a new human body communication (HBC) technique that employs magnetic resonance for data transfer in wireless body-area networks (BANs). Unlike electric field HBC (eHBC) links, which do not necessarily travel well through many biological tissues, the proposed magnetic HBC (mHBC) link easily travels through tissue, offering significantly reduced path loss and, as a result, reduced transceiver power consumption. In this paper the proposed mHBC concept is validated via finite element method simulations and measurements. It is demonstrated that path loss across the body under various postures varies from 10-20 dB, which is significantly lower than alternative BAN techniques.", "title": "" }, { "docid": "37148a1c4e16edeac5f8fb082ea3dc70", "text": "Familial aggregation and the effect of parenting styles on three dispositions toward ridicule and being laughed at were tested. Nearly 100 families (parents, their adult children, and their siblings) completed subjective questionnaires to assess the presence of gelotophobia (the fear of being laughed at), gelotophilia (the joy of being laughed at), and katagelasticism (the joy of laughing at others). A positive relationship between fear of being laughed at in children and their parents was found. Results for gelotophilia were similar but numerically lower; if split by gender of the adult child, correlations to the mother’s gelotophilia exceeded those of the father. Katagelasticism arose independently from the scores in the parents but was robustly related to greater katagelasticism in the children’s siblings. Gelotophobes remembered punishment (especially from the mother), lower warmth and higher control from their parents (this was also found in the parents’ recollections of their parenting style). The incidence of gelotophilia was unrelated to specific parenting styles, and katagelasticism exhibited only weak relations with punishment. The study suggests a specific pattern in the relation of the three dispositions within families and argues for a strong impact of parenting styles on gelotophobia but less so for gelotophilia and katagelasticism. DOI: https://doi.org/10.1080/17439760.2012.702784 Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-63535 Accepted Version Originally published at: Harzer, Claudia; Ruch, Willibald (2012). When the job is a calling: the role of applying one’s signature strengths at work. The Journal of Positive Psychology, 7(5):362-371. DOI: https://doi.org/10.1080/17439760.2012.702784 This manuscript was published as: Harzer, C., & Ruch, W. (2012). When the job is a calling: The role of applying one’s signature strengths at work. Journal of Positive Psychology, 7, 362371. 
doi:10.1080/17439760.2012.702784 The present study investigates the role of applying the individual signature strengths at work for positive experiences at work (i.e., job satisfaction, pleasure, engagement, meaning) and calling. A sample of 111 employees from various occupations completed measures on character strengths, positive experiences at work, and calling. Co-workers (N = 111) rated the applicability of character strengths at work. Correlations between applicability of character strengths and positive experiences at work decreased with intra-individual centrality of strengths (ranked strengths from the highest to the lowest). Level of positive experiences and calling were higher when four to seven signature strengths were applied at work compared to less than four. Positive experiences partially mediated the effect of the number of applied signature strengths on calling. Implications for further research and practice will be discussed.", "title": "" },
 { "docid": "0150caaaa121afdbf04dbf496d3770c3", "text": "The use of interactive technologies to aid in the implementation of smart cities has a significant potential to support disabled users in performing their activities as citizens. In this study, we present an investigation of the accessibility of a sample of 10 mobile Android™ applications of Brazilian municipalities, two from each of the five big geographical regions of the country, focusing especially on users with visual disabilities. The results showed that many of the applications were not in accordance with accessibility guidelines, with an average of 57 instances of violations and an average of 11.6 different criteria violated per application. The main problems included issues like not addressing labelling of non-textual content, headings, identifying user location, colour contrast, enabling users to interact using screen reader gestures, focus visibility and lack of adaptation of text contained in image. Although the growth in mobile applications has boosted the possibilities aligned with the principles of smart cities, there is a strong need for including accessibility in the design of such applications in order for disabled people to benefit from the potential they can have for their lives.", "title": "" },
 { "docid": "23d61c3396d49e223485baa1c66b8eab", "text": "Of the different branches of indoor localization research, WiFi fingerprinting has drawn significant attention over the past decade. These localization systems function by comparing the WiFi received signal strength indicator (RSSI) with a pre-established location-specific fingerprint map.
However, due to the time-variant wireless signal strength, the RSSI fingerprint map needs to be calibrated periodically, incurring high labor and time costs. In addition, biased RSSI measurements across devices along with transmission power control techniques of WiFi routers further undermine the fidelity of existing fingerprint-based localization systems. To remedy these problems, we propose GradIent FingerprinTing (GIFT) which leverages a more stable RSSI gradient. GIFT first builds a gradient-based fingerprint map (Gmap) by comparing absolute RSSI values at nearby positions, and then runs an online extended particle filter (EPF) to localize the user/device. By incorporating Gmap, GIFT is more adaptive to the time-variant RSSI in indoor environments, thus effectively reducing the overhead of fingerprint map calibration. We implemented GIFT on Android smartphones and tablets, and conducted extensive experiments in a five-story campus building. GIFT is shown to achieve an 80 percentile accuracy of 5.6 m with dynamic WiFi signals.", "title": "" }, { "docid": "94b8aeb8454b05a7916daf0f0b57ee8b", "text": "Accumulating evidence suggests that neuroinflammation affecting microglia plays an important role in the etiology of schizophrenia, and appropriate control of microglial activation may be a promising therapeutic strategy for schizophrenia. Minocycline, a second-generation tetracycline that inhibits microglial activation, has been shown to have a neuroprotective effect in various models of neurodegenerative disease, including anti-inflammatory, antioxidant, and antiapoptotic properties, and an ability to modulate glutamate-induced excitotoxicity. Given that these mechanisms overlap with neuropathologic pathways, minocycline may have a potential role in the adjuvant treatment of schizophrenia, and improve its negative symptoms. Here, we review the relevant studies of minocycline, ranging from preclinical research to human clinical trials.", "title": "" }, { "docid": "0d9affda4d9f7089d76a492676ab3f9e", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR' s Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR' s Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. The American Political Science Review is published by American Political Science Association. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org/joumals/apsa.html.", "title": "" } ]
scidocsrr
ba4fd858ae6198a47a0ea3ce1f079232
Extracting semantics from audio-visual content: the final frontier in multimedia retrieval
[ { "docid": "4070072c5bd650d1ca0daf3015236b31", "text": "Automated classiication of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries, increases the eeciency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identiication of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full-decoding of selective frames is required only for text analysis. A decision tree classiier built using these features is able to identify sports clips with an accuracy of about 93%.", "title": "" }, { "docid": "662b1ec9e2481df760c19567ce635739", "text": "Semantic versus nonsemantic information icture yourself as a fashion designer needing images of fabrics with a particular mixture of colors, a museum cataloger looking P for artifacts of a particular shape and textured pattern, or a movie producer needing a video clip of a red car-like object moving from right to left with the camera zooming. How do you find these images? Even though today’s technology enables us to acquire, manipulate, transmit, and store vast on-line image and video collections, the search methodologies used to find pictorial information are still limited due to difficult research problems (see “Semantic versus nonsemantic” sidebar). Typically, these methodologies depend on file IDS, keywords, or text associated with the images. And, although powerful, they", "title": "" } ]
[ { "docid": "09e740b38d0232361c89f47fce6155b4", "text": "Nano-emulsions consist of fine oil-in-water dispersions, having droplets covering the size range of 100-600 nm. In the present work, nano-emulsions were prepared using the spontaneous emulsification mechanism which occurs when an organic phase and an aqueous phase are mixed. The organic phase is an homogeneous solution of oil, lipophilic surfactant and water-miscible solvent, the aqueous phase consists on hydrophilic surfactant and water. An experimental study of nano-emulsion process optimisation based on the required size distribution was performed in relation with the type of oil, surfactant and the water-miscible solvent. The results showed that the composition of the initial organic phase was of great importance for the spontaneous emulsification process, and so, for the physico-chemical properties of the obtained emulsions. First, oil viscosity and HLB surfactants were changed, alpha-tocopherol, the most viscous oil, gave the smallest droplets size (171 +/- 2 nm), HLB required for the resulting oil-in-water emulsion was superior to 8. Second, the effect of water-solvent miscibility on the emulsification process was studied by decreasing acetone proportion in the organic phase. The solvent-acetone proportion leading to a fine nano-emulsion was fixed at 15/85% (v/v) with EtAc-acetone and 30/70% (v/v) with MEK-acetone mixture. To strength the choice of solvents, physical characteristics were compared, in particular, the auto-inflammation temperature and the flash point. This phase of emulsion optimisation represents an important step in the process of polymeric nanocapsules preparation using nanoprecipitation or interfacial polycondensation combined with spontaneous emulsification technique.", "title": "" }, { "docid": "a95761b5a67a07d02547c542ddc7e677", "text": "This paper examines the connection between the legal environment and financial development, and then traces this link through to long-run economic growth. Countries with legal and regulatory systems that (1) give a high priority to creditors receiving the full present value of their claims on corporations, (2) enforce contracts effectively, and (3) promote comprehensive and accurate financial reporting by corporations have better-developed financial intermediaries. The data also indicate that the exogenous component of financial intermediary development – the component of financial intermediary development defined by the legal and regulatory environment – is positively associated with economic growth. * Department of Economics, 114 Rouss Hall, University of Virginia, Charlottesville, VA 22903-3288; RL9J@virginia.edu. I thank Thorsten Beck, Maria Carkovic, Bill Easterly, Lant Pritchett, Andrei Shleifer, and seminar participants at the Board of Governors of the Federal Reserve System, the University of Virginia, and the World Bank for helpful comments.", "title": "" }, { "docid": "170a1dba20901d88d7dc3988647e8a22", "text": "This paper discusses two antennas monolithically integrated on-chip to be used respectively for wireless powering and UWB transmission of a tag designed and fabricated in 0.18-μm CMOS technology. A multiturn loop-dipole structure with inductive and resistive stubs is chosen for both antennas. 
Using these on-chip antennas, the chip employs asymmetric communication links: at downlink, the tag captures the required supply wirelessly from the received RF signal transmitted by a reader and, for the uplink, ultra-wideband impulse-radio (UWB-IR), in the 3.1-10.6-GHz band, is employed instead of backscattering to achieve extremely low power and a high data rate up to 1 Mb/s. At downlink with the on-chip power-scavenging antenna and power-management unit circuitry properly designed, 7.5-cm powering distance has been achieved, which is a huge improvement in terms of operation distance compared with other reported tags with on-chip antenna. Also, 7-cm operating distance is achieved with the implemented on-chip UWB antenna. The tag can be powered up at all the three ISM bands of 915 MHz and 2.45 GHz, with off-chip antennas, and 5.8 GHz with the integrated on-chip antenna. The tag receives its clock and the commands wirelessly through the modulated RF powering-up signal. Measurement results show that the tag can operate up to 1 Mb/s data rate with a minimum input power of -19.41 dBm at 915-MHz band, corresponding to 15.7 m of operation range with an off-chip 0-dB gain antenna. This is a great improvement compared with conventional passive RFIDs in term of data rate and operation distance. The power consumption of the chip is measured to be just 16.6 μW at the clock frequency of 10 MHz at 1.2-V supply. In addition, in this paper, for the first time, the radiation pattern of an on-chip antenna at such a frequency is measured. The measurement shows that the antenna has an almost omnidirectional radiation pattern so that the chip's performance is less direction-dependent.", "title": "" }, { "docid": "0778eff54b2f48c9ed4554c617b2dcab", "text": "The diagnosis of heart disease is a significant and tedious task in medicine. The healthcare industry gathers enormous amounts of heart disease data that regrettably, are not “mined” to determine concealed information for effective decision making by healthcare practitioners. The term Heart disease encompasses the diverse diseases that affect the heart. Cardiomyopathy and Cardiovascular disease are some categories of heart diseases. The reduction of blood and oxygen supply to the heart leads to heart disease. In this paper the data classification is based on supervised machine learning algorithms which result in accuracy, time taken to build the algorithm. Tanagra tool is used to classify the data and the data is evaluated using 10-fold cross validation and the results are compared.", "title": "" }, { "docid": "037dc2916e4356c11039e9520369ca3b", "text": "Surmounting terrain elevations, such as terraces, is useful to increase the reach of mobile robots operating in disaster areas, construction sites, and natural environments. This paper proposes an autonomous climbing maneuver for tracked mobile manipulators with the help of the onboard arm. The solution includes a fast 3-D scan processing method to estimate a simple set of geometric features for the ascent: three lines that correspond to the low and high edges, and the maximum inclination axis. Furthermore, terraces are classified depending on whether they are reachable through a slope or an abrupt step. In the proposed maneuver, the arm is employed both for shifting the center of gravity of the robot and as an extra limb that can be pushed against the ground. Feedback during climbing can be obtained through an inertial measurement unit, joint absolute encoders, and pressure sensors. 
Experimental results are presented for terraces of both kinds on rough terrain with the hydraulic mobile manipulator Alacrane.", "title": "" }, { "docid": "cfb1e7710233ca9a8e91885801326c20", "text": "During the last ten years technological development has reshaped the banking industry, which has become one of the leading sectors in utilizing new technology on consumer markets. Today, mobile communication technologies offer vast additional value for consumers’ banking transactions due to their always-on functionality and the option to access banks anytime and anywhere. Various alternative approaches have used in analyzing customer’s acceptance of new technologies. In this paper, factors affect acceptance of Mobile Banking are explored and presented as a New Model.", "title": "" }, { "docid": "a0c37bb6608f51f7095d6e5392f3c2f9", "text": "The main study objective was to develop robust processing and analysis techniques to facilitate the use of small-footprint lidar data for estimating plot-level tree height by measuring individual trees identifiable on the three-dimensional lidar surface. Lidar processing techniques included data fusion with multispectral optical data and local filtering with both square and circular windows of variable size. The lidar system used for this study produced an average footprint of 0.65 m and an average distance between laser shots of 0.7 m. The lidar data set was acquired over deciduous and coniferous stands with settings typical of the southeastern United States. The lidar-derived tree measurements were used with regression models and cross-validation to estimate tree height on 0.017-ha plots. For the pine plots, lidar measurements explained 97 percent of the variance associated with the mean height of dominant trees. For deciduous plots, regression models explained 79 percent of the mean height variance for dominant trees. Filtering for local maximum with circular windows gave better fitting models for pines, while for deciduous trees, filtering with square windows provided a slightly better model fit. Using lidar and optical data fusion to differentiate between forest types provided better results for estimating average plot height for pines. Estimating tree height for deciduous plots gave superior results without calibrating the search window size based on forest type. Introduction Laser scanner systems currently available have experienced a remarkable evolution, driven by advances in the remote sensing and surveying industry. Lidar sensors offer impressive performance that challange physical barriers in the optical and electronic domain by offering a high density of points at scanning frequencies of 50,000 pulses/second, multiple echoes per laser pulse, intensity measurements for the returning signal, and centimeter accuracy for horizontal and vertical positioning. Given a high density of points, processing algorithms can identify single trees or groups of trees in order to extract various measurements on their three-dimensional representation (e.g., Hyyppä and Inkinen, 2002). Seeing the Trees in the Forest: Using Lidar and Multispectral Data Fusion with Local Filtering and Variable Window Size for Estimating Tree Height Sorin C. Popescu and Randolph H. Wynne The foundations of lidar forest measurements lie with the photogrammetric techniques developed to assess tree height, volume, and biomass. 
Lidar characteristics, such as high sampling intensity, extensive areal coverage, ability to penetrate beneath the top layer of the canopy, precise geolocation, and accurate ranging measurements, make airborne laser systems useful for directly assessing vegetation characteristics. Early lidar studies had been used to estimate forest vegetation characteristics, such as percent canopy cover, biomass (Nelson et al., 1984; Nelson et al., 1988a; Nelson et al., 1988b; Nelson et al., 1997), and gross-merchantable timber volume (Maclean and Krabill, 1986). Research efforts investigated the estimation of forest stand characteristics with scanning lasers that provided lidar data with either relatively large laser footprints, i.e., 5 to 25 m (Harding et al., 1994; Lefsky et al., 1997; Weishampel et al., 1997; Blair et al., 1999; Lefsky et al., 1999; Means et al., 1999) or small footprints, but with only one laser return (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Hyyppä et al., 2001). A small-footprint lidar with the potential to record the entire time-varying distribution of returned pulse energy or waveform was used by Nilsson (1996) for measuring tree heights and stand volume. As more systems operate with high performance, research efforts for forestry applications of lidar have become very intense and resulted in a series of studies that proved that lidar technology is well suited for providing estimates of forest biophysical parameters. Needs for timely and accurate estimates of forest biophysical parameters have arisen in response to increased demands on forest inventory and analysis. The height of a forest stand is a crucial forest inventory attribute for calculating timber volume, site potential, and silvicultural treatment scheduling. Measuring of stand height by current manual photogrammetric or field survey techniques is time consuming and rather expensive. Tree heights have been derived from scanning lidar data sets and have been compared with ground-based canopy height measurements (Næsset, 1997a; Næsset, 1997b; Magnussen and Boudewyn, 1998; Magnussen et al., 1999; Næsset and Bjerknes, 2001; Næsset and Økland, 2002; Persson et al., 2002; Popescu, 2002; Popescu et al., 2002; Holmgren et al., 2003; McCombs et al., 2003). Despite the intense research efforts, practical applications of", "title": "" },
 { "docid": "109c5caa55d785f9f186958f58746882", "text": "Apriori and Eclat are the best-known basic algorithms for mining frequent item sets in a set of transactions. In this paper I describe implementations of these two algorithms that use several optimizations to achieve maximum performance, w.r.t. both execution time and memory usage. The Apriori implementation is based on a prefix tree representation of the needed counters and uses a doubly recursive scheme to count the transactions.
The Eclat implementation uses (sparse) bit matrices to represent transactions lists and to filter closed and maximal item sets.", "title": "" }, { "docid": "4f9b168efee2348f0f02f2480f9f449f", "text": "Transcutaneous neuromuscular electrical stimulation applied in clinical settings is currently characterized by a wide heterogeneity of stimulation protocols and modalities. Practitioners usually refer to anatomic charts (often provided with the user manuals of commercially available stimulators) for electrode positioning, which may lead to inconsistent outcomes, poor tolerance by the patients, and adverse reactions. Recent evidence has highlighted the crucial importance of stimulating over the muscle motor points to improve the effectiveness of neuromuscular electrical stimulation. Nevertheless, the correct electrophysiological definition of muscle motor point and its practical significance are not always fully comprehended by therapists and researchers in the field. The commentary describes a straightforward and quick electrophysiological procedure for muscle motor point identification. It consists in muscle surface mapping by using a stimulation pen-electrode and it is aimed at identifying the skin area above the muscle where the motor threshold is the lowest for a given electrical input, that is the skin area most responsive to electrical stimulation. After the motor point mapping procedure, a proper placement of the stimulation electrode(s) allows neuromuscular electrical stimulation to maximize the evoked tension, while minimizing the dose of the injected current and the level of discomfort. If routinely applied, we expect this procedure to improve both stimulation effectiveness and patient adherence to the treatment. The aims of this clinical commentary are to present an optimized procedure for the application of neuromuscular electrical stimulation and to highlight the clinical implications related to its use.", "title": "" }, { "docid": "619e3893a731ffd0ed78c9dd386a1dff", "text": "The introduction of new gesture interfaces has been expanding the possibilities of creating new Digital Musical Instruments (DMIs). Leap Motion Controller was recently launched promising fine-grained hand sensor capabilities. This paper proposes a preliminary study and evaluation of this new sensor for building new DMIs. Here, we list a series of gestures, recognized by the device, which could be theoretically used for playing a large number of musical instruments. Then, we present an analysis of precision and latency of these gestures as well as a first case study integrating Leap Motion with a virtual music keyboard.", "title": "" }, { "docid": "df0756ecff9f2ba84d6db342ee6574d3", "text": "Security is becoming a critical part of organizational information systems. Intrusion detection system (IDS) is an important detection that is used as a countermeasure to preserve data integrity and system availability from attacks. Data mining is being used to clean, classify, and examine large amount of network data to correlate common infringement for intrusion detection. The main reason for using data mining techniques for intrusion detection systems is due to the enormous volume of existing and newly appearing network data that require processing. The amount of data accumulated each day by a network is huge. Several data mining techniques such as clustering, classification, and association rules are proving to be useful for gathering different knowledge for intrusion detection. 
This paper presents the idea of applying data mining techniques to intrusion detection systems to maximize the effectiveness in identifying attacks, thereby helping the users to construct more secure information systems.", "title": "" }, { "docid": "058db5e1a8c58a9dc4b68f6f16847abc", "text": "Insurance companies must manage millions of claims per year. While most of these claims are non-fraudulent, fraud detection is core for insurance companies. The ultimate goal is a predictive model to single out the fraudulent claims and pay out the non-fraudulent ones immediately. Modern machine learning methods are well suited for this kind of problem. Health care claims often have a data structure that is hierarchical and of variable length. We propose one model based on piecewise feed forward neural networks (deep learning) and another model based on self-attention neural networks for the task of claim management. We show that the proposed methods outperform bagof-words based models, hand designed features, and models based on convolutional neural networks, on a data set of two million health care claims. The proposed self-attention method performs the best.", "title": "" }, { "docid": "619165e7f74baf2a09271da789e724df", "text": "MOST verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.", "title": "" }, { "docid": "05ab4fa15696ee8b47e017ebbbc83f2c", "text": "Vertically aligned rutile TiO2 nanowire arrays (NWAs) with lengths of ∼44 μm have been successfully synthesized on transparent, conductive fluorine-doped tin oxide (FTO) glass by a facile one-step solvothermal method. The length and wire-to-wire distance of NWAs can be controlled by adjusting the ethanol content in the reaction solution. By employing optimized rutile TiO2 NWAs for dye-sensitized solar cells (DSCs), a remarkable power conversion efficiency (PCE) of 8.9% is achieved. Moreover, in combination with a light-scattering layer, the performance of a rutile TiO2 NWAs based DSC can be further enhanced, reaching an impressive PCE of 9.6%, which is the highest efficiency for rutile TiO2 NWA based DSCs so far.", "title": "" }, { "docid": "0ccbc8579a1d6e39c92f8a7acea979bd", "text": "In mental health, the term ‘recovery’ is commonly used to refer to the lived experience of the person coming to terms with, and overcoming the challenges associated with, having a mental illness (Shepherd et al 2008). The term ‘recovery’ has evolved as having a special meaning for mental health service users (Andresen et al 2003) and consistently refers to their personal experiences and expectations for recovery (Slade et al 2008). 
On the other hand, mental health service providers often refer to a ‘recovery’ framework in order to promote their service (Meehan et al 2008). However, practitioners lean towards a different meaning-in-use, which is better described as ‘clinical recovery’ and is measured routinely in terms of symptom profiles, health service utilisation, health outcomes and global assessments of functioning. These very different meanings-in-use of the same term have the potential to cause considerable confusion to readers of the mental health literature. Researchers have recently identified an urgent need to clarify the recovery concept so that a common meaning can be established and the construct can be defined operationally (Meehan et al 2008, Slade et al 2008). This paper aims to delineate a construct of recovery that can be applied operationally and consistently in mental health. The criteria were twofold: 1. The dimensions need to have a parsimonious and near mutually exclusive internal structure 2. All stakeholder perspectives and interests, including those of the wider community, need to be accommodated. With these criteria in mind, the literature was revisited to identify possible domains. It was subsequently identified that the recovery literature can be reclassified into components that accommodate the views of service users, practitioners, rehabilitation providers, family and carers, and the wider community. The recovery dimensions identified were clinical recovery, personal recovery, social recovery and functional recovery. Recovery as a concept has gained increased attention in the field of mental health. There is an expectation that service providers use a recovery framework in their work. This raises the question of what recovery means, and how it is conceptualised and operationalised. It is proposed that service providers approach the application of recovery principles by considering systematically individual recovery goals in multiple domains, encompassing clinical recovery, personal recovery, social recovery and functional recovery. This approach enables practitioners to focus on service users’ personal recovery goals while considering parallel goals in the clinical, social, and role-functioning domains. Practitioners can reconceptualise recovery as involving more than symptom remission, and interventions can be tailored to aspects of recovery of importance to service users. In order to accomplish this shift, practitioners will require effective assessments, access to optimal treatment and care, and the capacity to conduct recovery planning in collaboration with service users and their families and carers. Mental health managers can help by fostering an organisational culture of service provision that supports a broader focus than that on clinical recovery alone, extending to client-centred recovery planning in multiple recovery domains.", "title": "" }, { "docid": "16a6c26d6e185be8383c062c6aa620f8", "text": "In this research, we suggested a vision-based traffic accident detection system for automatically detecting, recording, and reporting traffic accidents at intersections. This model first extracts the vehicles from the video image of CCD camera, tracks the moving vehicles, and extracts features such as the variation rate of the velocity, position, area, and direction of moving vehicles. The model then makes decisions on the traffic accident based on the extracted features. And we suggested and designed the metadata registry for the system to improve the interoperability. 
In the field test, 4 traffic accidents were detected and recorded by the system. The video clips are invaluable for intersection safety analysis.", "title": "" }, { "docid": "74ce3b76d697d59df0c5d3f84719abb8", "text": "Existing Byzantine fault tolerance (BFT) protocols face significant challenges in the consortium blockchain scenario. On the one hand, we can make little assumptions about the reliability and security of the underlying Internet. On the other hand, the applications on consortium blockchains demand a system as scalable as the Bitcoin but providing much higher performance, as well as provable safety. We present a new BFT protocol, Gosig, that combines crypto-based secret leader selection and multi-round voting in the protocol layer with implementation layer optimizations such as gossip-based message propagation. In particular, Gosig guarantees safety even in a network fully controlled by adversaries, while providing provable liveness with easy-to-achieve network connectivity assumption. On a wide area testbed consisting of 140 Amazon EC2 servers spanning 14 cities on five continents, we show that Gosig can achieve over 4,000 transactions per second with less than 1 minute transaction confirmation time.", "title": "" }, { "docid": "9c3218ce94172fd534e2a70224ee564f", "text": "Author ambiguity mainly arises when several different authors express their names in the same way, generally known as the namesake problem, and also when the name of an author is expressed in many different ways, referred to as the heteronymous name problem. These author ambiguity problems have long been an obstacle to efficient information retrieval in digital libraries, causing incorrect identification of authors and impeding correct classification of their publications. It is a nontrivial task to distinguish those authors, especially when there is very limited information about them. In this paper, we propose a graph based approach to author name disambiguation, where a graph model is constructed using the co-author relations, and author ambiguity is resolved by graph operations such as vertex (or node) splitting and merging based on the co-authorship. In our framework, called a Graph Framework for Author Disambiguation (GFAD), the namesake problem is solved by splitting an author vertex involved in multiple cycles of co-authorship, and the heteronymous name problem is handled by merging multiple author vertices having similar names if those vertices are connected to a common vertex. Experiments were carried out with the real DBLP and Arnetminer collections and the performance of GFAD is compared with three representative unsupervised author name disambiguation systems. We confirm that GFAD shows better overall performance from the perspective of representative evaluation metrics. An additional contribution is that we released the refined DBLP collection to the public to facilitate organizing a performance benchmark for future systems on author disambiguation.", "title": "" }, { "docid": "207bb3922ad45daa1023b70e1a18baf7", "text": "The article explains how photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries. The PRNU is an intrinsic property of all digital imaging sensors due to slight variations among individual pixels in their ability to convert photons to electrons. 
Consequently, every sensor casts a weak noise-like pattern onto every image it takes. This pattern, which plays the role of a sensor fingerprint, is essentially an unintentional stochastic spread-spectrum watermark that survives processing, such as lossy compression or filtering. This tutorial explains how this fingerprint can be estimated from images taken by the camera and later detected in a given image to establish image origin and integrity. Various forensic tasks are formulated as a two-channel hypothesis testing problem approached using the generalized likelihood ratio test. The performance of the introduced forensic methods is briefly illustrated on examples to give the reader a sense of the performance.", "title": "" }, { "docid": "d80fc668073878c476bdf3997b108978", "text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system", "title": "" } ]
scidocsrr
9700d880ea946726f8aa8a0afe0f63d8
Wearable Monitoring Unit for Swimming Performance Analysis
[ { "docid": "8717a6e3c20164981131997efbe08a0d", "text": "The recent maturity of body sensor networks has enabled a wide range of applications in sports, well-being and healthcare. In this paper, we hypothesise that a single unobtrusive head-worn inertial sensor can be used to infer certain biomotion details of specific swimming techniques. The sensor, weighing only seven grams is mounted on the swimmer's goggles, limiting the disturbance to a minimum. Features extracted from the recorded acceleration such as the pitch and roll angles allow to recognise the type of stroke, as well as basic biomotion indices. The system proposed represents a non-intrusive, practical deployment of wearable sensors for swimming performance monitoring.", "title": "" }, { "docid": "4122375a509bf06cc7e8b89cb30357ff", "text": "Textile-based sensors offer an unobtrusive method of continually monitoring physiological parameters during daily activities. Chemical analysis of body fluids, noninvasively, is a novel and exciting area of personalized wearable healthcare systems. BIOTEX was an EU-funded project that aimed to develop textile sensors to measure physiological parameters and the chemical composition of body fluids, with a particular interest in sweat. A wearable sensing system has been developed that integrates a textile-based fluid handling system for sample collection and transport with a number of sensors including sodium, conductivity, and pH sensors. Sensors for sweat rate, ECG, respiration, and blood oxygenation were also developed. For the first time, it has been possible to monitor a number of physiological parameters together with sweat composition in real time. This has been carried out via a network of wearable sensors distributed around the body of a subject user. This has huge implications for the field of sports and human performance and opens a whole new field of research in the clinical setting.", "title": "" } ]
[ { "docid": "0886c323b86b4fac8de6217583841318", "text": "Data Mining is a technique used in various domains to give meaning to the available data Classification is a data mining (machine learning) technique used to predict group membership for data instances. In this paper, we present the basic classification techniques. Several major kinds of classification method including decision tree, Bayesian networks, k-nearest neighbour classifier, Neural Network, Support vector machine. The goal of this paper is to provide a review of different classification techniques in data mining. Keywords— Data mining, classification, Supper vector machine (SVM), K-nearest neighbour (KNN), Decision Tree.", "title": "" }, { "docid": "c112b88b7a5762050a54a15d066336b0", "text": "Before 2005, data broker ChoicePoint suffered fraudulent access to its databases that exposed thousands of customers' personal information. We examine Choice-Point's data breach, explore what went wrong from the perspective of consumers, executives, policy, and IT systems, and offer recommendations for the future.", "title": "" }, { "docid": "2923ea4e17567b06b9d8e0e9f1650e55", "text": "A new compact two-segments dielectric resonator antenna (TSDR) for ultrawideband (UWB) application is presented and studied. The design consists of a thin monopole printed antenna loaded with two dielectric resonators with different dielectric constant. By applying a combination of U-shaped feedline and modified TSDR, proper radiation characteristics are achieved. The proposed antenna provides an ultrawide impedance bandwidth, high radiation efficiency, and compact antenna with an overall size of 18 × 36 × 11 mm . From the measurement results, it is found that the realized dielectric resonator antenna with good radiation characteristics provides an ultrawide bandwidth of about 110%, covering a range from 3.14 to 10.9 GHz, which covers UWB application.", "title": "" }, { "docid": "24174e59a5550fbf733c1a93f1519cf7", "text": "Using social practice theory, this article reveals the process of collective value creation within brand communities. Moving beyond a single case study, the authors examine previously published research in conjunction with data collected in nine brand communities comprising a variety of product categories, and they identify a common set of value-creating practices. Practices have an “anatomy” consisting of (1) general procedural understandings and rules (explicit, discursive knowledge); (2) skills, abilities, and culturally appropriate consumption projects (tacit, embedded knowledge or how-to); and (3) emotional commitments expressed through actions and representations. The authors find that there are 12 common practices across brand communities, organized by four thematic aggregates, through which consumers realize value beyond that which the firm creates or anticipates. They also find that practices have a physiology, interact with one another, function like apprenticeships, endow participants with cultural capital, produce a repertoire for insider sharing, generate consumption opportunities, evince brand community vitality, and create value. Theoretical and managerial implications are offered with specific suggestions for building and nurturing brand community and enhancing collaborative value creation between and among consumers and firms.", "title": "" }, { "docid": "114affaf4e25819aafa1c11da26b931f", "text": "We propose a coherent mathematical model for human fingerprint images. 
Fingerprint structure is represented simply as a hologram - namely a phase modulated fringe pattern. The holographic form unifies analysis, classification, matching, compression, and synthesis of fingerprints in a self-consistent formalism. Hologram phase is at the heart of the method; a phase that uniquely decomposes into two parts via the Helmholtz decomposition theorem. Phase also circumvents the infinite frequency singularities that always occur at minutiae. Reliable analysis is possible using a recently discovered two-dimensional demodulator. The parsimony of this model is demonstrated by the reconstruction of a fingerprint image with an extreme compression factor of 239.", "title": "" }, { "docid": "44a8b574a892bff722618d256aa4ba6c", "text": "In this article, we investigate the cross-media retrieval between images and text, that is, using image to search text (I2T) and using text to search images (T2I). Existing cross-media retrieval methods usually learn one couple of projections, by which the original features of images and text can be projected into a common latent space to measure the content similarity. However, using the same projections for the two different retrieval tasks (I2T and T2I) may lead to a tradeoff between their respective performances, rather than their best performances. Different from previous works, we propose a modality-dependent cross-media retrieval (MDCR) model, where two couples of projections are learned for different cross-media retrieval tasks instead of one couple of projections. Specifically, by jointly optimizing the correlation between images and text and the linear regression from one modal space (image or text) to the semantic space, two couples of mappings are learned to project images and text from their original feature spaces into two common latent subspaces (one for I2T and the other for T2I). Extensive experiments show the superiority of the proposed MDCR compared with other methods. In particular, based on the 4,096-dimensional convolutional neural network (CNN) visual feature and 100-dimensional Latent Dirichlet Allocation (LDA) textual feature, the mAP of the proposed method achieves the mAP score of 41.5%, which is a new state-of-the-art performance on the Wikipedia dataset.", "title": "" }, { "docid": "8ea0ac6401d648e359fc06efa59658e6", "text": "Different neural networks have exhibited excellent performance on various speech processing tasks, and they usually have specific advantages and disadvantages. We propose to use a recently developed deep learning model, recurrent convolutional neural network (RCNN), for speech processing, which inherits some merits of recurrent neural network (RNN) and convolutional neural network (CNN). The core module can be viewed as a convolutional layer embedded with an RNN, which enables the model to capture both temporal and frequency dependance in the spectrogram of the speech in an efficient way. The model is tested on speech corpus TIMIT for phoneme recognition and IEMOCAP for emotion recognition. Experimental results show that the model is competitive with previous methods in terms of accuracy and efficiency.", "title": "" }, { "docid": "474986186c068f8872f763288b0cabd7", "text": "Mobile ad hoc network researchers face the challenge of achieving full functionality with good performance while linking the new technology to the rest of the Internet. A strict layered design is not flexible enough to cope with the dynamics of manet environments, however, and will prevent performance optimizations. 
The MobileMan cross-layer architecture offers an alternative to the pure layered approach that promotes stricter local interaction among protocols in a manet node.", "title": "" }, { "docid": "c05f2a6df3d58c5a18e0087556c8067e", "text": "Child maltreatment is a major social problem. This paper focuses on measuring the relationship between child maltreatment and crime using data from the National Longitudinal Study of Adolescent Health (Add Health). We focus on crime because it is one of the most costly potential outcomes of maltreatment. Our work addresses two main limitations of the existing literature on child maltreatment. First, we use a large national sample, and investigate different types of maltreatment in a unified framework. Second, we pay careful attention to controlling for possible confounders using a variety of statistical methods that make differing assumptions. The results suggest that maltreatment greatly increases the probability of engaging in crime and that the probability increases with the experience of multiple forms of maltreatment.", "title": "" }, { "docid": "c9df206d8c0bc671f3109c1c7b12b149", "text": "The Internet of Things (IoT) is a unified network of physical objects that can change the parameters of the environment or of themselves, gather information, and transmit it to other devices. It is emerging as the third wave in the development of the internet. This technology will give immediate access to information about the physical world and the objects in it, leading to innovative services and increases in efficiency and productivity. The IoT is enabled by the latest developments in smart sensors, communication technologies, and Internet protocols. This article contains a description of Internet of Things (IoT) networks. Much attention is given to the prospects for future use of IoT and its development. Some problems in the development of IoT are also noted. The article also gives valuable information on building (constructing) IoT systems based on PLC technology.", "title": "" }, { "docid": "638336dba1dd589b0f708a9426483827", "text": "Girard's linear logic can be used to model programming languages in which each bound variable name has exactly one "occurrence"---i.e., no variable can have implicit "fan-out"; multiple uses require explicit duplication. Among other nice properties, "linear" languages need no garbage collector, yet have no dangling reference problems. We show a natural equivalence between a "linear" programming language and a stack machine in which the top items can undergo arbitrary permutations. Such permutation stack machines can be considered combinator abstractions of Moore's Forth programming language.", "title": "" }, { "docid": "28552dfe20642145afa9f9fa00218e8e", "text": "Augmented Reality can be of immense benefit to the construction industry. The oft-cited benefits of AR in the construction industry include real-time visualization of projects, project monitoring by overlaying virtual models on actual built structures, and onsite information retrieval. But this technology is restricted by the high cost and limited portability of the devices. Further, problems with real-time and accurate tracking in a construction environment hinder its broader application. To enable utilization of augmented reality on a construction site, a low-cost augmented reality framework based on the Google Cardboard visor is proposed. The current applications available for Google Cardboard have several limitations in delivering an AR experience relevant to construction requirements. 
To overcome these limitations Unity game engine, with the help of Vuforia & Cardboard SDK, is used to develop an application environment which can be used for location and orientation specific visualization and planning of work at construction workface. The real world image is captured through the smart-phone camera input and blended with the stereo input of the 3D models to enable a full immersion experience. The application is currently limited to marker based tracking where the 3D models are triggered onto the user’s view upon scanning an image which is registered with a corresponding 3D model preloaded into the application. A gaze input user interface is proposed which enables the user to interact with the augmented models. Finally usage of AR app while traversing the construction site is illustrated.", "title": "" }, { "docid": "2271347e3b04eb5a73466aecbac4e849", "text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. [2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method", "title": "" }, { "docid": "c28dc261ddc770a6655eb1dbc528dd3b", "text": "Software applications are no longer stand-alone systems. They are increasingly the result of integrating heterogeneous collections of components, both executable and data, possibly dispersed over a computer network. Different components can be provided by different producers and they can be part of different systems at the same time. Moreover, components can change rapidly and independently, making it difficult to manage the whole system in a consistent way. Under these circumstances, a crucial step of the software life cycle is deployment—that is, the activities related to the release, installation, activation, deactivation, update, and removal of components, as well as whole systems. This paper presents a framework for characterizing technologies that are intended to support software deployment. The framework highlights four primary factors concerning the technologies: process coverage; process changeability; interprocess coordination; and site, product, and deployment policy abstraction. A variety of existing technologies are surveyed and assessed against the framework. Finally, we discuss promising research directions in software deployment. This work was supported in part by the Air Force Material Command, Rome Laboratory, and the Defense Advanced Research Projects Agency under Contract Number F30602-94-C-0253. The content of the information does not necessarily reflect the position or the policy of the U.S. Government and no official endorsement should be inferred.", "title": "" }, { "docid": "ff002c483d22b4d961bbd2f1a18231fd", "text": "Dogs can be grouped into two distinct types of breed based on the predisposition to chondrodystrophy, namely, non-chondrodystrophic (NCD) and chondrodystrophic (CD). In addition to a different process of endochondral ossification, NCD and CD breeds have different characteristics of intravertebral disc (IVD) degeneration and IVD degenerative diseases. The anatomy, physiology, histopathology, and biochemical and biomechanical characteristics of the healthy and degenerated IVD are discussed in the first part of this two-part review. 
This second part describes the similarities and differences in the histopathological and biochemical characteristics of IVD degeneration in CD and NCD canine breeds and discusses relevant aetiological factors of IVD degeneration.", "title": "" }, { "docid": "58de521ab563333c2051b590592501a8", "text": "Prognostics and systems health management (PHM) is an enabling discipline that uses sensors to assess the health of systems, diagnoses anomalous behavior, and predicts the remaining useful performance over the life of the asset. The advent of the Internet of Things (IoT) enables PHM to be applied to all types of assets across all sectors, thereby creating a paradigm shift that is opening up significant new business opportunities. This paper introduces the concepts of PHM and discusses the opportunities provided by the IoT. Developments are illustrated with examples of innovations from manufacturing, consumer products, and infrastructure. From this review, a number of challenges that result from the rapid adoption of IoT-based PHM are identified. These include appropriate analytics, security, IoT platforms, sensor energy harvesting, IoT business models, and licensing approaches.", "title": "" }, { "docid": "011a9ac960aecc4a91968198ac6ded97", "text": "INTRODUCTION\nPsychological empowerment is important and has a remarkable effect on different organizational variables such as job satisfaction, organizational commitment, and productivity. The aim of this study was to investigate the relationship between psychological empowerment and the productivity of librarians in Isfahan Medical University.\n\n\nMETHODS\nThis was a correlational study. Data were collected through two questionnaires: a psychological empowerment questionnaire and the Goldsmith and Hersey manpower productivity questionnaire. Their content validity was confirmed by experts, and their reliability, estimated using Cronbach's alpha coefficient, was 0.89 and 0.9 respectively. Because of the limited statistical population, no sampling was used and the survey was conducted as a census, so all 76 librarians were evaluated. Data were reported using both descriptive and inferential statistics (Pearson and Spearman correlation coefficients, t-test, ANOVA) and analyzed with the SPSS 19 software.\n\n\nFINDINGS\nIn our study, trust between partners and self-efficacy had the highest correlations with productivity. There was also a direct relationship between psychological empowerment and labor productivity (r = 0.204). In other words, as the mean score of psychological empowerment rises, the mean score of productivity increases as well.\n\n\nCONCLUSIONS\nThe results showed that if programs for developing librarians' psychological empowerment are expanded in order to raise their productivity, librarians will carry out their duties with a better sense of their work. In addition, by drawing on librarians' capabilities, creativity will develop and organizational productivity will increase.", "title": "" }, { "docid": "a5090b67307b2efa1f8ae7d6a212a6ff", "text": "Providing highly flexible connectivity is a major architectural challenge for hardware implementation of reconfigurable neural networks. We perform an analytical evaluation and comparison of different configurable interconnect architectures (mesh NoC, tree, shared bus and point-to-point) emulating variants of two neural network topologies (having full and random configurable connectivity). 
We derive analytical expressions and asymptotic limits for performance (in terms of bandwidth) and cost (in terms of area and power) of the interconnect architectures considering three communication methods (unicast, multicast and broadcast). It is shown that multicast mesh NoC provides the highest performance/cost ratio and consequently it is the most suitable interconnect architecture for configurable neural network implementation. Routing table size requirements and their impact on scalability were analyzed. Modular hierarchical architecture based on multicast mesh NoC is proposed to allow large scale neural networks emulation. Simulation results successfully validate the analytical models and the asymptotic behavior of the network as a function of its size. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "78966bb154649f9f4abb87bd5f29b230", "text": "The objective of a news veracity detection system is to identify various types of potentially misleading or false information, typically in a digital platform. A critical challenge in this scenario is that there are large volumes of data available online. However, obtaining samples with annotations (i.e. ground-truth labels) is difficult and a known limiting factor for many data analytic tasks including the current problem of news veracity detection. In this paper, we propose a human-machine collaborative learning system to evaluate the veracity of a news content, with a limited amount of annotated data samples. In a semi-supervised scenario, an initial classifier is learnt on a small, limited amount of the annotated data followed by an interactive approach to gradually update the model by shortlisting only relevant samples from the large pool of unlabeled data that are most likely to improve the classifier performance. Our prioritized active learning solution achieves faster convergence in terms of the classification performance, while requiring about 1–2 orders of magnitude fewer annotated samples compared to fully supervised solutions to attain a reasonably acceptable accuracy of nearly 80%. Unlike traditional deep learning architecture, the proposed active learning based deep model designed with a smaller number of more localized filters per layer can efficiently learn from small relevant sample batches that can effectively improve performance in the weakly-supervised learning environment and thus is more suitable for several practical applications. An effective dynamic domain adaptive feature weighting scheme can adjust the relative importance of feature dimensions iteratively. Insightful initial feedback gathered from two independent learning modules (a NLP shallow feature based classifier and a deep classifier), modeled to capture complementary information about data characteristics are finally fused together to achieve an impressive 25% average gain in the detection performance.", "title": "" }, { "docid": "2faf7fedadfd8b24c4740f7100cf5fec", "text": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily onword similaritytasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of “semantic similarity” is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. 
Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.", "title": "" } ]
scidocsrr
58809bd46bc8f4656fa7a1c4495936fc
Designing of ORBAC Model For Secure Domain Environments
[ { "docid": "8f7428569e1d3036cdf4842d48b56c22", "text": "This paper describes a unified model for role-based access control (RBAC). RBAC is a proven technology for large-scale authorization. However, lack of a standard model results in uncertainty and confusion about its utility and meaning. The NIST model seeks to resolve this situation by unifying ideas from prior RBAC models, commercial products and research prototypes. It is intended to serve as a foundation for developing future standards. RBAC is a rich and open-ended technology which is evolving as users, researchers and vendors gain experience with it. The NIST model focuses on those aspects of RBAC for which consensus is available. It is organized into four levels of increasing functional capabilities called flat RBAC, hierarchical RBAC, constrained RBAC and symmetric RBAC. These levels are cumulative and each adds exactly one new requirement. An alternate approach comprising flat and hierarchical RBAC in an ordered sequence and two unordered features—constraints and symmetry—is also presented. The paper furthermore identifies important attributes of RBAC not included in the NIST model. Some are not suitable for inclusion in a consensus document. Others require further work and agreement before standardization is feasible.", "title": "" } ]
[ { "docid": "6992762ad22f9e33db6ded9430e06848", "text": "Solution M and C are strictly dominated and hence cannot receive positive probability in any Nash equilibrium. Given that only L and R receive positive probability, T cannot receive positive probability either. So, in any Nash equilibrium player 1 must play B with probability one. Given that, any probability distribution over L and R is a best response for player 2. In other words, the set of Nash equilibria is given by", "title": "" }, { "docid": "0dd43aa274838165077dc766ecdf3d83", "text": "Seeds play essential roles in plant life cycle and germination is a complex process which is associated with different phases of water imbibition. Upon imbibition, seeds begin utilization of storage substances coupled with metabolic activity and biosynthesis of new proteins. Regeneration of organelles and emergence of radicals lead to the establishment of seedlings. All these activities are regulated in coordinated manners. Translation is the requirement of germination of seeds via involvements of several proteins like beta-amylase, starch phosphorylase. Some important proteins involved in seed germination are discussed in this review. In the past decade, several proteomic studies regarding seed germination of various species such as rice, Arabidopsis have been conducted. We face A paucity of proteomic data with respect to woody plants e.g. Fagus, Pheonix etc. With particular reference to Cyclobalnopsis gilva, a woody plant having low seed germination rate, no proteomic studies have been conducted. The review aims to reveal the complex seed germination mechanisms from woody and herbaceous plants that will help in understanding different seed germination phases and the involved proteins in C. gilva.", "title": "" }, { "docid": "4b930300b13c954ad8a158517ebb8109", "text": "Under partial shading conditions, multiple peaks are observed in the power-voltage (P- V) characteristic curve of a photovoltaic (PV) array, and the conventional maximum power point tracking (MPPT) algorithms may fail to track the global maximum power point (GMPP). Therefore, this paper proposes a modified incremental conductance (Inc Cond) algorithm that is able to track the GMPP under partial shading conditions and load variation. A novel algorithm is introduced to modulate the duty cycle of the dc-dc converter in order to ensure fast MPPT process. Simulation and hardware implementation are carried out to evaluate the effectiveness of the proposed algorithm under partial shading and load variation. The results show that the proposed algorithm is able to track the GMPP accurately under different types of partial shading conditions, and the response during variation of load and solar irradiation are faster than the conventional Inc Cond algorithm. Hence, the effectiveness of the proposed algorithm under partial shading condition and load variation is validated in this paper.", "title": "" }, { "docid": "65dfecb5e0f4f658a19cd87fb94ff0ae", "text": "Although deep learning has produced dazzling successes for applications of image, speech, and video processing in the past few years, most trainings are with suboptimal hyper-parameters, requiring unnecessarily long training times. Setting the hyper-parameters remains a black art that requires years of experience to acquire. This report proposes several efficient ways to set the hyper-parameters that significantly reduce training time and improves performance. 
Specifically, this report shows how to examine the training validation/test loss function for subtle clues of underfitting and overfitting and suggests guidelines for moving toward the optimal balance point. Then it discusses how to increase/decrease the learning rate/momentum to speed up training. Our experiments show that it is crucial to balance every manner of regularization for each dataset and architecture. Weight decay is used as a sample regularizer to show how its optimal value is tightly coupled with the learning rates and momentums.", "title": "" }, { "docid": "f835e60133415e3ec53c2c9490048172", "text": "Probabilistic databases have received considerable attention recently due to the need for storing uncertain data produced by many real world applications. The widespread use of probabilistic databases is hampered by two limitations: (1) current probabilistic databases make simplistic assumptions about the data (e.g., complete independence among tuples) that make it difficult to use them in applications that naturally produce correlated data, and (2) most probabilistic databases can only answer a restricted subset of the queries that can be expressed using traditional query languages. We address both these limitations by proposing a framework that can represent not only probabilistic tuples, but also correlations that may be present among them. Our proposed framework naturally lends itself to the possible world semantics thus preserving the precise query semantics extant in current probabilistic databases. We develop an efficient strategy for query evaluation over such probabilistic databases by casting the query processing problem as an inference problem in an appropriately constructed probabilistic graphical model. We present several optimizations specific to probabilistic databases that enable efficient query evaluation. We validate our approach by presenting an experimental evaluation that illustrates the effectiveness of our techniques at answering various queries using real and synthetic datasets.", "title": "" }, { "docid": "24a4fb7f87d6ee75aa26aeb6b77f68bb", "text": "Networked learning is much more ambitious than previous approaches of ICT-support in education. It is therefore more difficult to evaluate the effectiveness and efficiency of the networked learning activities. Evaluation of learners’ interactions in networked learning environments is a difficult, resource and expertise demanding task. Educators participating in online learning environments, have very little support by integrated tools to evaluate students’ activities and identify learners’ online browsing behavior and interactions. As a consequence, educators are in need for non-intrusive and automatic ways to get feedback from learners’ progress in order to better follow their learning process and appraise the online course effectiveness. They also need specialized tools for authoring, delivering, gathering and analysing data for evaluating the learning effectiveness of networked learning courses. Thus, the aim of this paper is to propose a new set of services for the evaluator and lecturer so that he/she can easily evaluate the learners’ progress and produce evaluation reports based on learners’ behaviour within a Learning Management System. These services allow the evaluator to easily track down the learners’ online behavior at specific milestones set up, gather feedback in an automatic way and present them in a comprehensive way. 
The innovation of the proposed set of services lies in the effort to adopt/adapt some of the web usage mining techniques, combining them with the use of semantic descriptions of networked learning tasks.", "title": "" }, { "docid": "2a1bee8632e983ca683cd5a9abc63343", "text": "Phrase browsing techniques use phrases extracted automatically from a large information collection as a basis for browsing and accessing it. This paper describes a case study that uses an automatically constructed phrase hierarchy to facilitate browsing of an ordinary large Web site. Phrases are extracted from the full text using a novel combination of rudimentary syntactic processing and sequential grammar induction techniques. The interface is simple, robust and easy to use.\nTo convey a feeling for the quality of the phrases that are generated automatically, a thesaurus used by the organization responsible for the Web site is studied and its degree of overlap with the phrases in the hierarchy is analyzed. Our ultimate goal is to amalgamate hierarchical phrase browsing and hierarchical thesaurus browsing: the latter provides an authoritative domain vocabulary and the former augments coverage in areas the thesaurus does not reach.", "title": "" }, { "docid": "b2895d35c6ffddfb9adc7c1d88cef793", "text": "We develop algorithms for a stochastic appointment sequencing and scheduling problem with waiting time, idle time, and overtime costs. Scheduling surgeries in an operating room motivates the work. The problem is formulated as an integer stochastic program using sample average approximation. A heuristic solution approach based on Benders’ decomposition is developed and compared to exact methods and to previously proposed approaches. Extensive computational testing based on real data shows that the proposed methods produce good results compared to previous approaches. In addition we prove that the finite scenario sample average approximation problem is NP-complete.", "title": "" }, { "docid": "b6daaad245ea5a6f8bf0c6280a80705c", "text": "Human, Homo sapiens, female orgasm is not necessary for conception; hence it seems reasonable to hypothesize that orgasm is an adaptation for manipulating the outcome of sperm competition resulting from facultative polyandry. If heritable differences in male viability existed in the evolutionary past, selection could have favoured female adaptations (e.g. orgasm) that biased sperm competition in favour of males possessing heritable fitness indicators. Accumulating evidence suggests that low fluctuating asymmetry is a sexually selected male feature in a variety of species, including humans, possibly because it is a marker of genetic quality. Based on these notions, the proportion of a woman’s copulations associated with orgasm is predicted to be associated with her partner’s fluctuating asymmetry. A questionnaire study of 86 sexually active heterosexual couples supported this prediction. Women with partners possessing low fluctuating asymmetry and their partners reported significantly more copulatory female orgasms than were reported by women with partners possessing high fluctuating asymmetry and their partners, even with many potential confounding variables controlled. The findings are used to examine hypotheses for female orgasm other than selective sperm retention. © 1995 The Association for the Study of Animal Behaviour The human female orgasm has attracted great interest from many evolutionary behavioural scientists. Several hypotheses propose that female orgasm is an adaptation. 
First, human female orgasm has been claimed to create and maintain the pair bond between male and female by promoting female intimacy through sexual pleasure (e.g. Morris 1967; Eibl-Eibesfeldt 1989). Second, a number of evolutionists have suggested that human female orgasm functions in selective bonding with males by promoting affiliation primarily with males who are willing to invest time or material resources in the female (Alexander 1979; Alcock 1987) and/or males of genotypic quality (Smith 1984; Alcock 1987). Third, female orgasm has been said to motivate a female to pursue multiple males to prevent male infanticide of the female’s offspring and/or to gain material benefits from multiple mates (Hrdy 1981). Fourth, Morris (1967) proposed that human female orgasm functions to induce fatigue, sleep and a prone position, and thereby passively acts to retain sperm. Additional adaptational hypotheses suggest a more active process by which orgasm retains sperm. The ‘upsuck’ hypothesis proposes that orgasm actively retains sperm by sucking sperm into the uterus (Fox et al. 1970; see also Singer 1973). Smith (1984) modified this hypothesis into one based on sire choice; he argued that the evolved function of female orgasm is control over paternity of offspring by assisting the sperm of preferred sires and handicapping the sperm of non-preferred mates. Also, Baker & Bellis (1993; see also Baker et al. 1989) speculated that timing of the human female orgasm plays a role in sperm retention. Baker & Bellis (1993) showed that orgasm occurring near the time of male ejaculation results in greater sperm retention, as does orgasm up to 45 min. after ejaculation. Orgasm occurring more than a minute before male ejaculation appears not to enhance sperm retention. Baker & Bellis (1993) furthermore argued that orgasms occurring at one time may hinder retention of sperm from subsequent copulations up to 8 days later. In addition, a number of theorists have argued that human female orgasm has not been selected for because of its own functional significance", "title": "" }, { "docid": "e2de032eac6b4a8f6c816d6eb85b41ef", "text": "Terrestrial habitats surrounding wetlands are critical to the management of natural resources. Although the protection of water resources from human activities such as agriculture, silviculture, and urban development is obvious, it is also apparent that terrestrial areas surrounding wetlands are core habitats for many semiaquatic species that depend on mesic ecotones to complete their life cycle. For purposes of conservation and management, it is important to define core habitats used by local breeding populations surrounding wetlands. Our objective was to provide an estimate of the biologically relevant size of core habitats surrounding wetlands for amphibians and reptiles. We summarize data from the literature on the use of terrestrial habitats by amphibians and reptiles associated with wetlands (19 frog and 13 salamander species representing 1363 individuals; 5 snake and 28 turtle species representing more than 2245 individuals). Core terrestrial habitat ranged from 159 to 290 m for amphibians and from 127 to 289 m for reptiles from the edge of the aquatic site. 
Data from these studies also indicated the importance of terrestrial habitats for feeding, overwintering, and nesting, and, thus, the biological interdependence between aquatic and terrestrial habitats that is essential for the persistence of populations. The minimum and maximum values for core habitats, depending on the level of protection needed, can be used to set biologically meaningful buffers for wetland and riparian habitats. These results indicate that large areas of terrestrial habitat surrounding wetlands are critical for maintaining biodiversity. Introduction: Terrestrial habitats surrounding wetlands are critical for the management of water and wildlife resources. It is well established that these terrestrial habitats are the sites of physical and chemical filtration processes that protect water resources (e.g., drinking water, fisheries) from siltation, chemical pollution, and increases in water temperature caused by human activities such as agriculture, silviculture, and urban development (e.g., Lowrance et al. 1984; Forsythe & Roelle 1990). It is generally acknowledged that terrestrial buffers or riparian strips 30–60 m wide will effectively protect water resources (e.g., Lee & Samuel 1976; Phillips 1989; Hartman & Scrivener 1990; Davies & Nelson 1994; Brosofske et al. 1997). However, terrestrial habitats surrounding wetlands are important to more than just the protection of water resources. 
They are also essential to the conservation and management of semiaquatic species. In the last few years, a number of studies have documented the use of terrestrial habitats adjacent to wetlands by a broad range of taxa, including mammals, birds, reptiles, and amphibians ( e.g., Rudolph & Dickson 1990; McComb et al. 1993; Darveau et al. 1995; Spackman & Hughes 1995; Hodges & Krementz 1996; Semlitsch 1998; Bodie 2001; Darveau et al. 2001 ). These studies have shown the close dependence of semiaquatic species, such as amphibians and reptiles, on terrestrial habitats for critical life-history functions. For example, amphibians, such as frogs and salamanders, breed and lay eggs in wetlands during short breeding seasons lasting only a few days or weeks and during the remainder of the year emigrate to terrestrial habitats to forage and overwinter (e.g., Madison 1997; Richter et al. 2001). Reptiles, such as turtles and snakes, often live and forage in aquatic habitats most of the year but emigrate to upland habitats to nest or overwinter (e.g., Gibbons et al. 1977; Semlitsch et al. 1988; Burke & Gibbons 1995; Bodie 2001). The biological importance of these habitats in maintaining biodiversity is obvious, yet criteria by which to define habitats and regulations to protect them are ambiguous or lacking (Semlitsch & Bodie 1998; Semlitsch & Jensen 2001). More importantly, a serious gap is created in biodiversity protection when regulations or ordinances, especially those of local or state governments, have been set based on criteria to protect water resources alone, without considering habitats critical to wildlife species. Further, the aquatic and terrestrial habitats needed to carry out life-history functions are essential and are defined here as “core habitats.” No summaries of habitat use by amphibians and reptiles exist to estimate the biologically relevant size of core habitats surrounding wetlands that are needed to protect biodiversity. For conservation and management, it is important to define and distinguish core habitats used by local breeding populations surrounding wetlands. For example, adult frogs, salamanders, and turtles are generally philopatric to individual wetlands and migrate annually between aquatic and terrestrial habitats to forage, reproduce, and overwinter ( e.g., Burke & Gibbons 1995; Semlitsch 1998). The amount of terrestrial habitats used during migrations to and from wetlands and for foraging defines the terrestrial core habitat of a population. This aggregation of breeding adults constitutes a local population centered on a single wetland or wetland complex. Local populations are connected by dispersal and are part of a larger metapopulation, which extends across the landscape (Pulliam 1988; Marsh & Trenham 2001). Annual migrations centered on a single wetland or wetland complex are biologically different than dispersal to new breeding sites. It is thought that dispersal among populations is achieved primarily by juveniles for amphibians ( e.g., Gill 1978; Breden 1987; Berven & Grudzien 1990) or by males for turtles (e.g., Morreale et al. 1984). Dispersal by juvenile amphibians tends to be unidirectional and longer in distance than the annual migratory movements of breeding adults ( e.g., Breden 1987; Seburn et al. 1997 ). Thus, habitats adjacent to wetlands can serve as stopping points and corridors for dispersal to other nearby wetlands. 
Ultimately, conservation and management plans must consider both local and landscape dynamics (Semlitsch 2000), but core habitats for local populations need to be defined before issues of connectivity at the metapopulation level are considered.", "title": "" }, { "docid": "db26d71ec62388e5367eb0f2bb45ad40", "text": "Linear programming (LP) is one of the most popular optimization tools, used for data analytics as well as in various scientific fields. However, the current state-of-the-art algorithms suffer from scalability issues when processing Big Data. For example, the commercial optimization software IBM CPLEX cannot handle an LP with more than hundreds of thousands of variables or constraints. Existing algorithms are fundamentally hard to scale because they are inevitably too complex to parallelize. To address the issue, we study the possibility of using the Belief Propagation (BP) algorithm as an LP solver. BP has shown remarkable performance on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been done in this area. In particular, while it is generally believed that BP implicitly solves an optimization problem, it is not well understood under what conditions the solution of BP converges to that of a corresponding LP formulation. Our efforts consist of two main parts. First, we perform a theoretical study and establish the conditions under which BP can solve LP [1,2]. Although there have been several works studying the relation between BP and LP for certain instances, our work provides a generic condition unifying all prior works for generic LP. Second, utilizing our theoretical results, we develop practical BP-based parallel algorithms for solving generic LPs, which show a 71x speed-up while sacrificing only 0.1% accuracy compared to the state-of-the-art exact algorithm [3, 4]. As a result of the study, the PIs have published two conference papers [1,3], and two follow-up journal papers [3,4] are under submission. We refer the readers to our published work [1,3] for details. Introduction: The main goal of our research is to develop a distributed and parallel algorithm for large-scale linear optimization (or programming). Considering the popularity and importance of linear optimization in various fields, the proposed method has great potential for application to various big data analytics tasks. Our approach is based on the Belief Propagation (BP) algorithm, which has shown remarkable performance on various machine learning tasks and naturally lends itself to fast parallel implementations. Our key contributions are summarized below: 1) We establish key theoretical foundations in the area of Belief Propagation. In particular, we show that BP converges to the solution of LP if some sufficient conditions are satisfied.
The rest of the report contains a summary of our work that appeared in UAI (Uncertainty in Artificial Intelligence) and the IEEE Conference on Big Data [1,3], and follow-up work [2,4] under submission to major journals. Experiment: We first establish theoretical conditions under which Belief Propagation (BP) can solve Linear Programming (LP), and second provide a practical distributed/parallel BP-based framework solving generic optimizations. We demonstrate the wide applicability of our approach via popular combinatorial optimizations including maximum weight matching, shortest path, traveling salesman, cycle packing and vertex cover. Results and Discussion: Our contribution consists of two parts: Study 1 [1,2] looks at the theoretical conditions under which BP converges to the solution of LP. Our theoretical results unify almost all prior results about BP for combinatorial optimization. Furthermore, our conditions provide a guideline for designing distributed algorithms for combinatorial optimization problems. Study 2 [3,4] focuses on building an optimal framework based on the theory of Study 1 for boosting the practical performance of BP. Our framework is generic and thus can be easily extended to various optimization problems. We also compare the empirical performance of our framework to other heuristics and state-of-the-art algorithms for several combinatorial optimization problems. ------------------------------------------------------- Study 1 ------------------------------------------------------- We first introduce the background for our contributions. A joint distribution of n (binary) variables x = [x_i] ∈ {0,1}^n is called a graphical model (GM) if it factorizes as Pr[x] ∝ ∏_{α∈F} ψ_α(x_α) for x = [x_i] ∈ {0,1}^n, where the ψ_α are some non-negative functions, so-called factors; F is a collection of subsets (each α ∈ F is a subset of {1,⋯,n} with |α| ≥ 2); and x_α is the projection of x onto the dimensions included in α. An assignment x* is called a maximum-a-posteriori (MAP) assignment if x* maximizes the probability. The following figure depicts the graphical relation between factors and variables. Figure 1: Factor graph for the graphical model with factors α1 = {1,3}, α2 = {1,2,4}, α3 = {2,3,4}. Now we introduce the algorithm, (max-product) BP, for approximating the MAP assignment in a graphical model. BP is an iterative procedure; at each iteration t, there are four messages between each variable x_i and every associated α ∈ F_i, where F_i := {α ∈ F : i ∈ α}. The messages are then updated, BP marginal beliefs are computed from the messages, and BP outputs the approximated MAP assignment x^BP = [x_i^BP] based on these beliefs. Now, we are ready to introduce the main result of Study 1. Consider the following GM for x = [x_i] ∈ {0,1}^n and w = [w_i] ∈ ℝ^n, where the factor function ψ_α for α ∈ F is defined in terms of certain matrices and vectors. Consider the Linear Program (LP) corresponding to the above GM: one can easily observe that the MAP assignments for the GM correspond to the (optimal) solution of the above LP if the LP has an integral solution x* ∈ {0,1}^n. The following theorem is the main result of Study 1; it provides sufficient conditions under which BP can indeed find the LP solution. Theorem 1 can be applied to several combinatorial optimization problems including matching, network flow, shortest path, vertex cover, etc. 
See [1,2] for the detailed proof of Theorem 1 and its applications to various combinatorial optimizations including maximum weight matching, min-cost network flow, shortest path, vertex cover and traveling salesman. ------------------------------------------------------- Study 2 ------------------------------------------------------- Study 2 mainly focuses on providing a distributed generic BP-based combinatorial optimization solver which has high accuracy and low computational complexity. In summary, the key contributions of Study 2 are as follows: 1) Practical BP-based algorithm design: To the best of our knowledge, this paper is the first to propose a generic concept for designing BP-based algorithms that solve large-scale combinatorial optimization problems. 2) Parallel implementation: We also demonstrate that the algorithm is easily parallelizable. For the maximum weighted matching problem, this translates to a 71x speed-up while sacrificing only 0.1% accuracy compared to the state-of-the-art exact algorithm. 3) Extensive empirical evaluation: We evaluate our algorithms on three different combinatorial optimization problems on diverse synthetic and real-world datasets. Our evaluation shows that the framework achieves higher accuracy compared to other known heuristics. Designing a BP-based algorithm for some problem is easy in general. However, (a) it might diverge or converge very slowly, (b) even if it converges quickly, the BP decision might not be correct, and (c) even worse, BP might produce an infeasible solution, i.e., it does not satisfy the constraints of the problem. Figure 2: Overview of our generic BP-based framework. To address these issues, we propose a generic BP-based framework that provides highly accurate approximate solutions for combinatorial optimization problems. The framework has two steps, as shown in Figure 2. In the first phase, it runs a BP algorithm for a fixed number of iterations without waiting for convergence. Then, the second phase runs a known heuristic using BP beliefs instead of the original weights to output a feasible solution. Namely, the first and second phases are respectively designed for ‘BP weight transforming’ and ‘post-processing’. Note that our evaluation mainly uses the maximum weighted matching problem. The formal description of the maximum weight matching (MWM) problem is as follows: given a graph G = (V, E) and edge weights w = [w_e] ∈ ℝ^|E|, it finds a set of edges such that each vertex is connected to at most one edge in the set and the sum of edge weights in the set is maximized. The problem is formulated as the following IP (Integer Programming): maximize Σ_{e∈E} w_e x_e subject to Σ_{e∈δ(v)} x_e ≤ 1 for all v ∈ V and x_e ∈ {0,1}, where δ(v) is the set of edges incident to vertex v ∈ V. In the following paragraphs, we describe the two phases in more detail in reverse order. We first describe the post-processing phase. As we mentioned, one of the main issues of a BP-based algorithm is that the decision on BP beliefs might give an infeasible solution. To resolve the issue, we use post-processing by utilizing existing heuristics for the given problem that find a feasible solution. Applying post-processing ensures that the solution is at least feasible. In addition, our key idea is to replace the original weights by the logarithm of BP beliefs, i.e. a function of (3). After th", "title": "" }, { "docid": "b6634563103c752e961f6ff32759922b", "text": "Among the several biometric traits that can be used for people identification, the fingerprint is still the most used. 
Current automated fingerprint identification systems are based on ridge patterns and minutiae, classified as first and second level features, respectively. However, the development of new fingerprint sensors and the growing demand for more secure systems are leading to the use of additional discriminative fingerprint characteristics known as third level features, such as the sweat pores. Recent researches on fingerprint recognition have focused on fingerprint fragments, in which methods based only on first and second level features tend to obtain low recognition rates. This paper proposes a robust method developed for fingerprint recognition from fingerprint fragments based on ridges and sweat pores. We have extended a ridgebased fingerprint recognition method previously proposed in the literature, based on Hough Transform, by incorporating sweat pores information in the matching step. Experimental results showed that although the reduction of Equal Error Rate is modest, a significant improvement was observed when analyzing the FMR100 and FMR1000 metrics, which are more suitable for high security applications. For these two metrics, the proposed approach obtained a reduction superior to 10% of the rates, when compared to the original ridge-based approach. Keywords-biometrics; fingerprints; ridges; sweat pores", "title": "" }, { "docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c", "text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.", "title": "" }, { "docid": "c182be9222690ffe1c94729b2b79d8ed", "text": "A balanced level of muscle strength between the different parts of the scapular muscles is important in optimizing performance and preventing injuries in athletes. Emerging evidence suggests that many athletes lack balanced strength in the scapular muscles. Evidence-based recommendations are important for proper exercise prescription. This study determines scapular muscle activity during strengthening exercises for scapular muscles performed at low and high intensities (Borg CR10 levels 3 and 8). 
Surface electromyography (EMG) from selected scapular muscles was recorded during 7 strengthening exercises and expressed as a percentage of the maximal EMG. Seventeen women (aged 24-55 years) without serious disorders participated. Several of the investigated exercises-press-up, prone flexion, one-arm row, and prone abduction at Borg 3 and press-up, push-up plus, and one-arm row at Borg 8-predominantly activated the lower trapezius over the upper trapezius (activation difference [Δ] 13-30%). Likewise, several of the exercises-push-up plus, shoulder press, and press-up at Borg 3 and 8-predominantly activated the serratus anterior over the upper trapezius (Δ18-45%). The middle trapezius was activated over the upper trapezius by one-arm row and prone abduction (Δ21-30%). Although shoulder press and push-up plus activated the serratus anterior over the lower trapezius (Δ22-33%), the opposite was true for prone flexion, one-arm row, and prone abduction (Δ16-54%). Only the press-up and push-up plus activated both the lower trapezius and the serratus anterior over the upper trapezius. In conclusion, several of the investigated exercises both at low and high intensities predominantly activated the serratus anterior and lower and middle trapezius, respectively, over the upper trapezius. These findings have important practical implications for exercise prescription for optimal shoulder function. For example, both workers with neck pain and athletes at risk of shoulder impingement (e.g., overhead sports) should perform push-up plus and press-ups to specifically strengthen the serratus anterior and lower trapezius.", "title": "" }, { "docid": "95df9ceddf114060d981415c0b1d6125", "text": "This paper presents a comparative study of different neural network models for forecasting the weather of Vancouver, British Columbia, Canada. For developing the models, we used one year’s data comprising of daily maximum and minimum temperature, and wind-speed. We used Multi-Layered Perceptron (MLP) and an Elman Recurrent Neural Network (ERNN), which were trained using the one-step-secant and LevenbergMarquardt algorithms. To ensure the effectiveness of neurocomputing techniques, we also tested the different connectionist models using a different training and test data set. Our goal is to develop an accurate and reliable predictive model for weather analysis. Radial Basis Function Network (RBFN) exhibits a good universal approximation capability and high learning convergence rate of weights in the hidden and output layers. 
Experimental results obtained have shown RBFN produced the most accurate forecast model as compared to ERNN and MLP networks.", "title": "" }, { "docid": "14b15f15cb7dbb3c19a09323b4b67527", "text": "• Establishing mechanisms for sharing knowledge and technology among experts in different fields related to automated de-identification and reversible de-identification • Providing innovative solutions for concealing, or removal of identifiers while preserving data utility and naturalness • Investigating reversible de-identification and providing a thorough analysis of security risks of reversible de-identification • Providing a detailed analysis of legal, ethical and social repercussion of reversible/non-reversible de-identification • Promoting and facilitating the transfer of knowledge to all stakeholders (scientific community, end-users, SMEs) through workshops, conference special sessions, seminars and publications", "title": "" }, { "docid": "141e3ad8619577140f02a1038981ecb2", "text": "Sponges are sessile benthic filter-feeding animals, which harbor numerous microorganisms. The enormous diversity and abundance of sponge associated bacteria envisages sponges as hot spots of microbial diversity and dynamics. Many theories were proposed on the ecological implications and mechanism of sponge-microbial association, among these, the biosynthesis of sponge derived bioactive molecules by the symbiotic bacteria is now well-indicated. This phenomenon however, is not exhibited by all marine sponges. Based on the available reports, it has been well established that the sponge associated microbial assemblages keep on changing continuously in response to environmental pressure and/or acquisition of microbes from surrounding seawater or associated macroorganisms. In this review, we have discussed nutritional association of sponges with its symbionts, interaction of sponges with other eukaryotic organisms, dynamics of sponge microbiome and sponge-specific microbial symbionts, sponge-coral association etc.", "title": "" }, { "docid": "69f3a41f7250377b2d99aa61249db37e", "text": "In this paper, a fuzzy ontology and its application to news summarization are presented. The fuzzy ontology with fuzzy concepts is an extension of the domain ontology with crisp concepts. It is more suitable to describe the domain knowledge than domain ontology for solving the uncertainty reasoning problems. First, the domain ontology with various events of news is predefined by domain experts. The document preprocessing mechanism will generate the meaningful terms based on the news corpus and the Chinese news dictionary defined by the domain expert. Then, the meaningful terms will be classified according to the events of the news by the term classifier. The fuzzy inference mechanism will generate the membership degrees for each fuzzy concept of the fuzzy ontology. Every fuzzy concept has a set of membership degrees associated with various events of the domain ontology. In addition, a news agent based on the fuzzy ontology is also developed for news summarization. The news agent contains five modules, including a retrieval agent, a document preprocessing mechanism, a sentence path extractor, a sentence generator, and a sentence filter to perform news summarization. Furthermore, we construct an experimental website to test the proposed approach. 
The experimental results show that the news agent based on the fuzzy ontology can effectively operate for news summarization.", "title": "" }, { "docid": "063389c654f44f34418292818fc781e7", "text": "In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of vulnerability indicators used in the disciplines of earthquakeand flood vulnerability assessments. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of indexand curve-based vulnerability models that use these indicators are described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-the-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend future studies for exploring risk assessment methodologies across different hazard types.", "title": "" }, { "docid": "255f4d19d89e9ff7acb6cca900fe9ed6", "text": "Objectives: Burnout syndrome (B.S.) affects millions of workers around the world, having a significant impact on their quality of life and the services they provide. It’s a psycho-social phenomenon, which can be handled through emotional management and psychological help. Emotional Intelligence (E.I) is very important to emotional management. This paper aims to investigate the relationship between Burnout syndrome and Emotional Intelligence in health professionals occupied in the sector of rehabilitation. Methods: The data were collected from a sample of 148 healthcare professionals, workers in the field of rehabilitation, who completed Maslach Burnout Inventory questionnaire, Trait Emotional Intelligence Que-Short Form questionnaire and a questionnaire collecting demographic data as well as personal and professional information. Simple linear regression and multiple regression analyses were conducted to analyze the data. Results: The results indicated that there is a positive relationship between Emotional Intelligence and Burnout syndrome as Emotional Intelligence acts protectively against Burnout syndrome and even reduces it. In particular, it was found that the higher the Emotional Intelligence, the lower the Burnout syndrome. Also, among all factors of Emotional Intelligence, “Emotionality”, seems to influence Burnout syndrome the most, as, the higher the rate of Emotionality, the lower the rate of Burnout. At the same time, evidence was found on the variability of Burnout syndrome through various models of explanation and correlation between Burnout syndrome and Emotional Intelligence and also, Burnout syndrome and Emotional Intelligence factors. 
Conclusion: Employers could focus on building emotional relationships with their employees, especially in the health care field. Furthermore, they could also promote experiential seminars, sponsored by public or private institutions, in order to enhance Emotional Intelligence and to improve the workers’ quality of life and the quality of services they provide.", "title": "" } ]
scidocsrr
fe29d7cd82b7c04669406cb95c494ed4
Opponent Modeling in Deep Reinforcement Learning
[ { "docid": "d65ccb1890bdc597c19d11abad6ae7af", "text": "The traditional view of agent modelling is to infer the explicit parameters of another agent’s strategy (i.e., their probability of taking each action in each situation). Unfortunately, in complex domains with high dimensional strategy spaces, modelling every parameter often requires a prohibitive number of observations. Furthermore, given a model of such a strategy, computing a response strategy that is robust to modelling error may be impractical to compute online. Instead, we propose an implicit modelling framework where agents aim to estimate the utility of a fixed portfolio of pre-computed strategies. Using the domain of heads-up limit Texas hold’em poker, this work describes an end-to-end approach for building an implicit modelling agent. We compute robust response strategies, show how to select strategies for the portfolio, and apply existing variance reduction and online learning techniques to dynamically adapt the agent’s strategy to its opponent. We validate the approach by showing that our implicit modelling agent would have won the heads-up limit opponent exploitation event in the 2011 Annual Computer Poker Competition.", "title": "" }, { "docid": "ff140197e5f96ca0f5837f2774c1825f", "text": "When an opponent with a stationary and stochastic policy is encountered in a twoplayer competitive game, model-free Reinforcement Learning (RL) techniques such as Q-learning and Sarsa(λ) can be used to learn near-optimal counter strategies given enough time. When an agent has learned such counter strategies against multiple diverse opponents, it is not trivial to decide which one to use when a new unidentified opponent is encountered. Opponent modeling provides a sound method for accomplishing this in the case where a policy has already been learned against the new opponent; the policy corresponding to the most likely opponent model can be employed. When a new opponent has never been encountered previously, an appropriate policy may not be available. The proposed solution is to use model-based RL methods in conjunction with separate environment and opponent models. The model-based RL algorithms used were Dyna-Q and value iteration (VI). The environment model allows an agent to reuse general knowledge about the game that is not tied to a specific opponent. Opponent models that are evaluated include Markov chains, Mixtures of Markov chains, and Latent Dirichlet Allocation on Markov chains. The latter two models are latent variable models, which make predictions for new opponents by estimating their latent (unobserved) parameters. In some situations, I have found that this allows good predictive models to be learned quickly for new opponents given data from previous opponents. I show cases where these models have low predictive perplexity (high accuracy) for novel opponents. In theory, these opponent models would enable modelbased RL agents to learn best response strategies in conjunction with an environment model, but converting prediction accuracy to actual game performance is non-trivial. This was not achieved with these methods for the domain, which is a two-player soccer game based on a physics simulation. Model-based RL did allow for faster learning in the game, but did not take full advantage of the opponent models. The quality of the environment model seems to be a critical factor in this situation.", "title": "" } ]
[ { "docid": "fb204d2f9965d17ed87c8fe8d1f22cdd", "text": "Are metaphors departures from a norm of literalness? According to classical rhetoric and most later theories, including Gricean pragmatics, they are. No, metaphors are wholly normal, say the Romantic critics of classical rhetoric and a variety of modern scholars ranging from hard-nosed cognitive scientists to postmodern critical theorists. On the metaphor-as-normal side, there is a broad contrast between those, like the cognitive linguists Lakoff, Talmy or Fauconnier, who see metaphor as pervasive in language because it is constitutive of human thought, and those, like the psycholinguists Glucksberg or Kintsch, or relevance theorists, who describe metaphor as emerging in the process of verbal communication. 1 While metaphor cannot be both wholly normal and a departure from normal language use, there might be distinct, though related, metaphorical phenomena at the level of thought, on the one hand, and verbal communication, on the other. This possibility is being explored (for instance) in the work of Raymond Gibbs. 2 In this chapter, we focus on the relevance-theoretic approach to linguistic metaphors.", "title": "" }, { "docid": "14d68a45e54b07efb15ef950ba92d7bc", "text": "We propose a novel hierarchical approach for text-to-image synthesis by inferring semantic layout. Instead of learning a direct mapping from text to image, our algorithm decomposes the generation process into multiple steps, in which it first constructs a semantic layout from the text by the layout generator and converts the layout to an image by the image generator. The proposed layout generator progressively constructs a semantic layout in a coarse-to-fine manner by generating object bounding boxes and refining each box by estimating object shapes inside the box. The image generator synthesizes an image conditioned on the inferred semantic layout, which provides a useful semantic structure of an image matching with the text description. Our model not only generates semantically more meaningful images, but also allows automatic annotation of generated images and user-controlled generation process by modifying the generated scene layout. We demonstrate the capability of the proposed model on challenging MS-COCO dataset and show that the model can substantially improve the image quality, interpretability of output and semantic alignment to input text over existing approaches.", "title": "" }, { "docid": "ddfd02c12c42edb2607a6f193f4c242b", "text": "We design the first Leakage-Resilient Identity-Based Encryption (LR-IBE) systems from static assumptions in the standard model. We derive these schemes by applying a hash proof technique from Alwen et.al. (Eurocrypt '10) to variants of the existing IBE schemes of Boneh-Boyen, Waters, and Lewko-Waters. As a result, we achieve leakage-resilience under the respective static assumptions of the original systems in the standard model, while also preserving the efficiency of the original schemes. Moreover, our results extend to the Bounded Retrieval Model (BRM), yielding the first regular and identity-based BRM encryption schemes from static assumptions in the standard model.\n The first LR-IBE system, based on Boneh-Boyen IBE, is only selectively secure under the simple Decisional Bilinear Diffie-Hellman assumption (DBDH), and serves as a stepping stone to our second fully secure construction. This construction is based on Waters IBE, and also relies on the simple DBDH. 
Finally, the third system is based on Lewko-Waters IBE, and achieves full security with shorter public parameters, but is based on three static assumptions related to composite order bilinear groups.", "title": "" }, { "docid": "5519eea017d8f69804060f5e40748b1a", "text": "The nonlinear Fourier transform is a transmission and signal processing technique that makes positive use of the Kerr nonlinearity in optical fibre channels. I will overview recent advances and some of challenges in this field.", "title": "" }, { "docid": "0185bbf151e3de2cc038420380a3e877", "text": "Powder-based additive manufacturing (AM) technologies have been evaluated for use in different fields of application (aerospace, medical, etc.). Ideally, AM parts should be at least equivalent, or preferably better quality than conventionally produced parts. Manufacturing defects and their effects on the quality and performance of AM parts are a currently a major concern. It is essential to understand the defect types, their generation mechanisms, and the detection methodologies for mechanical properties evaluation and quality control. We consider the various types of microstructural features or defects, their generation mechanisms, their effect on bulk properties and the capability of existing characterisation methodologies for powder based AM parts in this work. Methods of in-situ non-destructive evaluation and the influence of defects on mechanical properties and design considerations are also reviewed. Together, these provide a framework to understand the relevant machine and material parameters, optimise the process and production, and select appropriate characterisation methods.", "title": "" }, { "docid": "698cc50558811c7af44d40ba7dbdfe6f", "text": "We show that the demand for news varies with the perceived affinity of the news organization to the consumer’s political preferences. In an experimental setting, conservatives and Republicans preferred to read news reports attributed to Fox News and to avoid news from CNN and NPR. Democrats and liberals exhibited exactly the opposite syndrome—dividing their attention equally between CNN and NPR, but avoiding Fox News. This pattern of selective exposure based on partisan affinity held not only for news coverage of controversial issues but also for relatively ‘‘soft’’ subjects such as crime and travel. The tendency to select news based on anticipated agreement was also strengthened among more politically engaged partisans. Overall, these results suggest that the further proliferation of new media and enhanced media choices may contribute to the further polarization of the news audience.", "title": "" }, { "docid": "e71bd8a43806651b412d00848821a517", "text": "Techniques for procedural generation of the graphics content have seen widespread use in multimedia over the past thirty years. It is still an active area of research with many applications in 3D modeling software, video games, and films. This thesis focuses on algorithmic generation of virtual terrains in real-time and their real-time visualization. We provide an overview of available approaches and present an extendable library for procedural terrain synthesis.", "title": "" }, { "docid": "5dfbe9036bc9fd63edc53992daf1858d", "text": "The paper reviews applications of data mining in manufacturing engineering, in particular production processes, operations, fault detection, maintenance, decision support, and product quality improvement. 
Customer relationship management, information integration aspects, and standardization are also briefly discussed. This review is focused on demonstrating the relevancy of data mining to manufacturing industry, rather than discussing the data mining domain in general. The volume of general data mining literature makes it difficult to gain a precise view of a target area such as manufacturing engineering, which has its own particular needs and requirements for mining applications. This review reveals progressive applications in addition to existing gaps and less considered areas such as manufacturing planning and shop floor control. DOI: 10.1115/1.2194554", "title": "" }, { "docid": "e92ab865f33c7548c21ba99785912d03", "text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.", "title": "" }, { "docid": "f99d0e24dece8b2de287b7d86c483f83", "text": "Recently, the Task Force on Process Mining released the Process Mining Manifesto. The manifesto is supported by 53 organizations and 77 process mining experts contributed to it. The active contributions from end-users, tool vendors, consultants, analysts, and researchers illustrate the growing relevance of process mining as a bridge between data mining and business process modeling. This paper summarizes the manifesto and explains why process mining is a highly relevant, but also very challenging, research area. This way we hope to stimulate the broader ACM SIGKDD community to look at process-centric knowledge discovery.", "title": "" }, { "docid": "97270ca739c7e005da4cab41f19342e7", "text": "Automatic approach for bladder segmentation from computed tomography (CT) images is highly desirable in clinical practice. It is a challenging task since the bladder usually suffers large variations of appearance and low soft-tissue contrast in CT images. 
In this study, we present a deep learning-based approach which involves a convolutional neural network (CNN) and a 3D fully connected conditional random fields recurrent neural network (CRF-RNN) to perform accurate bladder segmentation. We also propose a novel preprocessing method, called dual-channel preprocessing, to further advance the segmentation performance of our approach. The presented approach works as following: first, we apply our proposed preprocessing method on the input CT image and obtain a dual-channel image which consists of the CT image and an enhanced bladder density map. Second, we exploit a CNN to predict a coarse voxel-wise bladder score map on this dual-channel image. Finally, a 3D fully connected CRF-RNN refines the coarse bladder score map and produce final fine-localized segmentation result. We compare our approach to the state-of-the-art V-net on a clinical dataset. Results show that our approach achieves superior segmentation accuracy, outperforming the V-net by a significant margin. The Dice Similarity Coefficient of our approach (92.24%) is 8.12% higher than that of the V-net. Moreover, the bladder probability maps performed by our approach present sharper boundaries and more accurate localizations compared with that of the V-net. Our approach achieves higher segmentation accuracy than the state-of-the-art method on clinical data. Both the dual-channel processing and the 3D fully connected CRF-RNN contribute to this improvement. The united deep network composed of the CNN and 3D CRF-RNN also outperforms a system where the CRF model acts as a post-processing method disconnected from the CNN.", "title": "" }, { "docid": "693dd8eb0370259c4ee5f8553de58443", "text": "Most research in Interactive Storytelling (IS) has sought inspiration in narrative theories issued from contemporary narratology to either identify fundamental concepts or derive formalisms for their implementation. In the former case, the theoretical approach gives raise to empirical solutions, while the latter develops Interactive Storytelling as some form of “computational narratology”, modeled on computational linguistics. In this paper, we review the most frequently cited theories from the perspective of IS research. We discuss in particular the extent to which they can actually inspire IS technologies and highlight key issues for the effective use of narratology in IS.", "title": "" }, { "docid": "fba1a1296d8f3e22248e45cbe33263b5", "text": "Wi-Fi has become the de facto wireless technology for achieving short- to medium-range device connectivity. While early attempts to secure this technology have been proved inadequate in several respects, the current more robust security amendments will inevitably get outperformed in the future, too. In any case, several security vulnerabilities have been spotted in virtually any version of the protocol rendering the integration of external protection mechanisms a necessity. In this context, the contribution of this paper is multifold. First, it gathers, categorizes, thoroughly evaluates the most popular attacks on 802.11 and analyzes their signatures. Second, it offers a publicly available dataset containing a rich blend of normal and attack traffic against 802.11 networks. A quite extensive first-hand evaluation of this dataset using several machine learning algorithms and data features is also provided. 
Given that to the best of our knowledge the literature lacks such a rich and well-tailored dataset, it is anticipated that the results of the work at hand will offer a solid basis for intrusion detection in the current as well as next-generation wireless networks.", "title": "" }, { "docid": "5bb9ca3c14dd84f1533789c3fe4bbd10", "text": "The field of spondyloarthritis (SpA) has experienced major progress in the last decade, especially with regard to new treatments, earlier diagnosis, imaging technology and a better definition of outcome parameters for clinical trials. In the present work, the Assessment in SpondyloArthritis international Society (ASAS) provides a comprehensive handbook on the most relevant aspects for the assessments of spondyloarthritis, covering classification criteria, MRI and x rays for sacroiliac joints and the spine, a complete set of all measurements relevant for clinical trials and international recommendations for the management of SpA. The handbook focuses at this time on axial SpA, with ankylosing spondylitis (AS) being the prototype disease, for which recent progress has been faster than in peripheral SpA. The target audience includes rheumatologists, trial methodologists and any doctor and/or medical student interested in SpA. The focus of this handbook is on practicality, with many examples of MRI and x ray images, which will help to standardise not only patient care but also the design of clinical studies.", "title": "" }, { "docid": "274186e87674920bfe98044aa0208320", "text": "Message routing in mobile delay tolerant networks inherently relies on the cooperation between nodes. In most existing routing protocols, the participation of nodes in the routing process is taken as granted. However, in reality, nodes can be unwilling to participate. We first show in this paper the impact of the unwillingness of nodes to participate in existing routing protocols through a set of experiments. Results show that in the presence of even a small proportion of nodes that do not forward messages, performance is heavily degraded. We then analyze two major reasons of the unwillingness of nodes to participate, i.e., their rational behavior (also called selfishness) and their wariness of disclosing private mobility information. Our main contribution in this paper is to survey the existing related research works that overcome these two issues. We provide a classification of the existing approaches for protocols that deal with selfish behavior. We then conduct experiments to compare the performance of these strategies for preventing different types of selfish behavior. For protocols that preserve the privacy of users, we classify the existing approaches and provide an analytical comparison of their security guarantees.", "title": "" }, { "docid": "6e6237011de5348d9586fb70941b4037", "text": "BACKGROUND\nAlthough clinicians frequently add a second medication to an initial, ineffective antidepressant drug, no randomized controlled trial has compared the efficacy of this approach.\n\n\nMETHODS\nWe randomly assigned 565 adult outpatients who had nonpsychotic major depressive disorder without remission despite a mean of 11.9 weeks of citalopram therapy (mean final dose, 55 mg per day) to receive sustained-release bupropion (at a dose of up to 400 mg per day) as augmentation and 286 to receive buspirone (at a dose of up to 60 mg per day) as augmentation. 
The primary outcome of remission of symptoms was defined as a score of 7 or less on the 17-item Hamilton Rating Scale for Depression (HRSD-17) at the end of this study; scores were obtained over the telephone by raters blinded to treatment assignment. The 16-item Quick Inventory of Depressive Symptomatology--Self-Report (QIDS-SR-16) was used to determine the secondary outcomes of remission (defined as a score of less than 6 at the end of this study) and response (a reduction in baseline scores of 50 percent or more).\n\n\nRESULTS\nThe sustained-release bupropion group and the buspirone group had similar rates of HRSD-17 remission (29.7 percent and 30.1 percent, respectively), QIDS-SR-16 remission (39.0 percent and 32.9 percent), and QIDS-SR-16 response (31.8 percent and 26.9 percent). Sustained-release bupropion, however, was associated with a greater reduction (from baseline to the end of this study) in QIDS-SR-16 scores than was buspirone (25.3 percent vs. 17.1 percent, P<0.04), a lower QIDS-SR-16 score at the end of this study (8.0 vs. 9.1, P<0.02), and a lower dropout rate due to intolerance (12.5 percent vs. 20.6 percent, P<0.009).\n\n\nCONCLUSIONS\nAugmentation of citalopram with either sustained-release bupropion or buspirone appears to be useful in actual clinical settings. Augmentation with sustained-release bupropion does have certain advantages, including a greater reduction in the number and severity of symptoms and fewer side effects and adverse events. (ClinicalTrials.gov number, NCT00021528.).", "title": "" }, { "docid": "81a44de6f529f09e78ade5384c9b1527", "text": "Code Blue is an emergency code used in hospitals to indicate when a patient goes into cardiac arrest and needs resuscitation. When Code Blue is called, an on-call medical team staffed by physicians and nurses is paged and rushes in to try to save the patient's life. It is an intense, chaotic, and resource-intensive process, and despite the considerable effort, survival rates are still less than 20% [4]. Research indicates that patients actually start showing clinical signs of deterioration some time before going into cardiac arrest [1][2[][3], making early prediction, and possibly intervention, feasible. In this paper, we describe our work, in partnership with NorthShore University HealthSystem, that preemptively flags patients who are likely to go into cardiac arrest, using signals extracted from demographic information, hospitalization history, vitals and laboratory measurements in patient-level electronic medical records. We find that early prediction of Code Blue is possible and when compared with state of the art existing method used by hospitals (MEWS - Modified Early Warning Score)[4], our methods perform significantly better. Based on these results, this system is now being considered for deployment in hospital settings.", "title": "" }, { "docid": "8bb5a38908446ca4e6acb4d65c4c817c", "text": "Column-oriented database systems have been a real game changer for the industry in recent years. Highly tuned and performant systems have evolved that provide users with the possibility of answering ad hoc queries over large datasets in an interactive manner. In this paper we present the column-oriented datastore developed as one of the central components of PowerDrill. It combines the advantages of columnar data layout with other known techniques (such as using composite range partitions) and extensive algorithmic engineering on key data structures. 
The main goal of the latter is to reduce the main memory footprint and to increase the efficiency in processing typical user queries. In this combination we achieve large speed-ups. These enable a highly interactive Web UI where it is common that a single mouse click leads to processing a trillion values in the underlying dataset.", "title": "" }, { "docid": "c75095680818ccc7094e4d53815ef475", "text": "We propose a new learning method, \"Generalized Learning Vector Quantization (GLVQ),\" in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.", "title": "" } ]
scidocsrr
8c6ed91a636dc9882769d0faa93bf9b8
The Affordances of Business Analytics for Strategic Decision-Making and Their Impact on Organisational Performance
[ { "docid": "ba4121003eb56d3ab6aebe128c219ab7", "text": "Mediation is said to occur when a causal effect of some variable X on an outcome Y is explained by some intervening variable M. The authors recommend that with small to moderate samples, bootstrap methods (B. Efron & R. Tibshirani, 1993) be used to assess mediation. Bootstrap tests are powerful because they detect that the sampling distribution of the mediated effect is skewed away from 0. They argue that R. M. Baron and D. A. Kenny's (1986) recommendation of first testing the X --> Y association for statistical significance should not be a requirement when there is a priori belief that the effect size is small or suppression is a possibility. Empirical examples and computer setups for bootstrap analyses are provided.", "title": "" } ]
[ { "docid": "879b58634bd71c8eee6c37350c196dc3", "text": "This paper presents a novel high-voltage gain boost converter topology based on the three-state commutation cell for battery charging using PV panels and a reduced number of conversion stages. The presented converter operates in zero-voltage switching (ZVS) mode for all switches. By using the new concept of single-stage approaches, the converter can generate a dc bus with a battery bank or a photovoltaic panel array, allowing the simultaneous charge of the batteries according to the radiation level. The operation principle, design specifications, and experimental results from a 500-W prototype are presented in order to validate the proposed structure.", "title": "" }, { "docid": "2ae773f548c1727a53a7eb43550d8063", "text": "Today's Internet hosts are threatened by large-scale distributed denial-of-service (DDoS) attacks. The path identification (Pi) DDoS defense scheme has recently been proposed as a deterministic packet marking scheme that allows a DDoS victim to filter out attack packets on a per packet basis with high accuracy after only a few attack packets are received (Yaar , 2003). In this paper, we propose the StackPi marking, a new packet marking scheme based on Pi, and new filtering mechanisms. The StackPi marking scheme consists of two new marking methods that substantially improve Pi's incremental deployment performance: Stack-based marking and write-ahead marking. Our scheme almost completely eliminates the effect of a few legacy routers on a path, and performs 2-4 times better than the original Pi scheme in a sparse deployment of Pi-enabled routers. For the filtering mechanism, we derive an optimal threshold strategy for filtering with the Pi marking. We also develop a new filter, the PiIP filter, which can be used to detect Internet protocol (IP) spoofing attacks with just a single attack packet. Finally, we discuss in detail StackPi's compatibility with IP fragmentation, applicability in an IPv6 environment, and several other important issues relating to potential deployment of StackPi", "title": "" }, { "docid": "e71402bed9c526d9152885ef86c30bb5", "text": "Narratives structure our understanding of the world and of ourselves. They exploit the shared cognitive structures of human motivations, goals, actions, events, and outcomes. We report on a computational model that is motivated by results in neural computation and captures fine-grained, context sensitive information about human goals, processes, actions, policies, and outcomes. We describe the use of the model in the context of a pilot system that is able to interpret simple stories and narrative fragments in the domain of international politics and economics. 
We identify problems with the pilot system and outline extensions required to incorporate several crucial dimensions of narrative structure.", "title": "" }, { "docid": "9a6e7b49ddfa98520af1bb33bfb5fafa", "text": "Spell Description Schl Comp Time Range Target, Effect, Area Duration Save SR PHB £ Acid Fog Fog deals 2d6/rnd acid damage Conj V,S,M/DF 1 a Medium 20-ft radius 1 rnd/lvl-196 £ Acid Splash Acid Missile 1d3 damage Conj V,S 1 a Close Acid missile Instantaneous-196 £ Aid +1 att,+1 fear saves,1d8 +1/lvl hps Ench V,S,DF 1 a Touch One living creature 1 min/lvl-Yes 196 £ Air Walk Target treads on air as if solid Trans V,S,DF 1 a Touch One creature 10 min/lvl-Yes 196 £ Alarm Wards an area for 2 hr/lvl Abjur V,S,F/DF 1 a Close 20-ft radius 2 hr/lvl (D)-197 £ Align Weapon Adds alignment to weapon Trans V,S,DF 1 a Touch Weapon 1 min/lvl Will negs Yes 197 £ Alter Self Changes appearance Trans V,S 1 a Self Caster, +10 disguise 10 min/lvl (D)-197 £ Analyze Dweomer Reveals magical aspects of target Div V,S,F 1 a Close Item or creature/lvl 1 rnd/lvl (D) Will negs-197 £ Animal Growth Animal/2 lvls increases size category Trans V,S 1 a Medium 1 animal/2 lvls 1 min/lvl Fort negs Yes 198 £ Animal Messenger Send a tiny animal to specific place Ench V,S,M 1 a Close One tiny animal 1 day/lvl-Yes 198 £ Animal Shapes 1 ally/lvl polymorphs into animal Trans V,S,DF 1 a Close One creature/lvl 1 hr/lvl (D)-Yes 198 £ Animal Trance Fascinates 2d6 HD of animals Ench V,S 1 a Close Animals, Int 1 or 2 Conc Will negs Yes 198 £ Animate Dead Creates skeletons and zombies Necro V,S,M 1 a Touch Max 2HD/lvl Instantaneous-198 £ Animate Objects Items attack your foes Trans V,S 1 a Medium One small item/lvl 1 rnd/lvl-199 £ Animate Plants Animated plant Trans V 1 a Close 1 plant/3lvls 1 rnd/lvl-199 £ Animate Rope Rope moves at your command Trans V,S 1 a Medium 1 ropelike item 1 rnd/lvl-199 £ Antilife Shell 10-ft field excludes living creatures Abjur V,S,DF Round 10-ft 10-ft radius 10 min/lvl (D)-Yes 199 £ Antimagic Field Negates magic within 10-ft Abjur V,S,M/DF 1 a 10-ft 10-ft radius 10 min/lvl (D)-Sp 200 £ Antipathy Item or location repels creatures Ench V,S,M/DF 1 hr Close Location or item 2 hr/lvl (D) Will part Yes 200 £ Antiplant Shell Barrier protects against plants Abjur V,S,DF 1 a 10-ft 10-ft radius 10 min/lvl (D)-Yes 200 £ Arcane Eye Floating eye, moves 30ft/rnd Div V,S,M 10 min Unlimited Magical sensor 1 min/lvl (D)-200 …", "title": "" }, { "docid": "de6e139d0b5dc295769b5ddb9abcc4c6", "text": "1 Abd El-Moniem M. Bayoumi is a graduate TA at the Department of Computer Engineering, Cairo University. He received his BS degree in from Cairo University in 2009. He is currently an RA, working for a research project on developing an innovative revenue management system for the hotel business. He was awarded the IEEE CIS Egypt Chapter’s special award for his graduation project in 2009. Bayoumi is interested to research in machine learning and business analytics; and he is currently working on his MS on stock market prediction.", "title": "" }, { "docid": "1b60ded506c85edd798fe0759cce57fa", "text": "The studies of plant trait/disease refer to the studies of visually observable patterns of a particular plant. Nowadays crops face many traits/diseases. Damage of the insect is one of the major trait/disease. Insecticides are not always proved efficient because insecticides may be toxic to some kind of birds. It also damages natural animal food chains. 
A common practice for plant scientists is to estimate the damage of plant (leaf, stem) because of disease by an eye on a scale based on percentage of affected area. It results in subjectivity and low throughput. This paper provides a advances in various methods used to study plant diseases/traits using image processing. The methods studied are for increasing throughput & reducing subjectiveness arising from human experts in detecting the plant diseases.", "title": "" }, { "docid": "15cfa9005e68973cbca60f076180b535", "text": "Much of the literature on fair classifiers considers the case of a single classifier used once, in isolation. We initiate the study of composition of fair classifiers. In particular, we address the pitfalls of näıve composition and give general constructions for fair composition. Focusing on the individual fairness setting proposed in [Dwork, Hardt, Pitassi, Reingold, Zemel, 2011], we also extend our results to a large class of group fairness definitions popular in the recent literature. We exhibit several cases in which group fairness definitions give misleading signals under composition and conclude that additional context is needed to evaluate both group and individual fairness under composition.", "title": "" }, { "docid": "9a73e9bc7c0dc343ad9dbe1f3dfe650c", "text": "The word robust has been used in many contexts in signal processing. Our treatment concerns statistical robustness, which deals with deviations from the distributional assumptions. Many problems encountered in engineering practice rely on the Gaussian distribution of the data, which in many situations is well justified. This enables a simple derivation of optimal estimators. Nominal optimality, however, is useless if the estimator was derived under distributional assumptions on the noise and the signal that do not hold in practice. Even slight deviations from the assumed distribution may cause the estimator's performance to drastically degrade or to completely break down. The signal processing practitioner should, therefore, ask whether the performance of the derived estimator is acceptable in situations where the distributional assumptions do not hold. Isn't it robustness that is of a major concern for engineering practice? Many areas of engineering today show that the distribution of the measurements is far from Gaussian as it contains outliers, which cause the distribution to be heavy tailed. Under such scenarios, we address single and multichannel estimation problems as well as linear univariate regression for independently and identically distributed (i.i.d.) data. A rather extensive treatment of the important and challenging case of dependent data for the signal processing practitioner is also included. For these problems, a comparative analysis of the most important robust methods is carried out by evaluating their performance theoretically, using simulations as well as real-world data.", "title": "" }, { "docid": "290b56471b64e150e40211f7a51c1237", "text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. 
The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.", "title": "" }, { "docid": "3405c4808237f8d348db27776d6e9b61", "text": "Pheochromocytomas are catecholamine-releasing tumors that can be found in an extraadrenal location in 10% of the cases. Almost half of all pheochromocytomas are now discovered incidentally during cross-sectional imaging for unrelated causes. We present a case of a paragaglioma of the organ of Zuckerkandl that was discovered incidentally during a magnetic resonance angiogram performed for intermittent claudication. Subsequent investigation with computed tompgraphy and I-123 metaiodobenzylguanine scintigraphy as well as an overview of the literature are also presented.", "title": "" }, { "docid": "fadbfcc98ad512dd788f6309d0a932af", "text": "Thanks to the convergence of pervasive mobile communications and fast-growing online social networking, mobile social networking is penetrating into our everyday life. Aiming to develop a systematic understanding of mobile social networks, in this paper we exploit social ties in human social networks to enhance cooperative device-to-device (D2D) communications. Specifically, as handheld devices are carried by human beings, we leverage two key social phenomena, namely social trust and social reciprocity, to promote efficient cooperation among devices. With this insight, we develop a coalitional game-theoretic framework to devise social-tie-based cooperation strategies for D2D communications. We also develop a network-assisted relay selection mechanism to implement the coalitional game solution, and show that the mechanism is immune to group deviations, individually rational, truthful, and computationally efficient. We evaluate the performance of the mechanism by using real social data traces. Simulation results corroborate that the proposed mechanism can achieve significant performance gain over the case without D2D cooperation.", "title": "" }, { "docid": "4f3b91bfaa2304e78ad5cd305fb5d377", "text": "The construction of a three-dimensional object model from a set of images taken from different viewpoints is an important problem in computer vision. One of the simplest ways to do this is to use the silhouettes of the object (the binary classification of images into object and background) to construct a bounding volume for the object. To efficiently represent this volume, we use an octree, which represents the object as a tree of recursively subdivided cubes. We develop a new algorithm for computing the octree bounding volume from multiple silhouettes and apply it to an object rotating on a turntable in front of a stationary camera. The algorithm performs a limited amount of processing for each viewpoint and incrementally builds the volumetric model. 
The resulting algorithm requires less total computation than previous algorithms, runs in close to real-time, and builds a model whose resolution improves over time. 1993 Academic Press, Inc.", "title": "" }, { "docid": "cc3fbbff0a4d407df0736ef9d1be5dd0", "text": "The purpose of this study is to examine the effect of brand image benefits on satisfaction and loyalty intention in the context of color cosmetic product. Five brand image benefits consisting of functional, social, symbolic, experiential and appearance enhances were investigated. A survey carried out on 97 females showed that functional and appearance enhances significantly affect loyalty intention. Four of brand image benefits: functional, social, experiential, and appearance enhances are positively related to overall satisfaction. The results also indicated that overall satisfaction does influence customers' loyalty. The results imply that marketers should focus on brand image benefits in their effort to achieve customer loyalty.", "title": "" }, { "docid": "f07c06a198547aa576b9a6350493e6d4", "text": "In this paper we examine the diffusion of competing rumors in social networks. Two players select a disjoint subset of nodes as initiators of the rumor propagation, seeking to maximize the number of persuaded nodes. We use concepts of game theory and location theory and model the selection of starting nodes for the rumors as a strategic game. We show that computing the optimal strategy for both the first and the second player is NP-complete, even in a most restricted model. Moreover we prove that determining an approximate solution for the first player is NP-complete as well. We analyze several heuristics and show that—counter-intuitively—being the first to decide is not always an advantage, namely there exist networks where the second player can convince more nodes than the first, regardless of the first player’s decision.", "title": "" }, { "docid": "186145f38fd2b0e6ff41bb50cdeace13", "text": "Automatic sarcasm detection is the task of predicting sarcasm in text. This is a crucial step to sentiment analysis, considering prevalence and challenges of sarcasm in sentiment-bearing text. Beginning with an approach that used speech-based features, automatic sarcasm detection has witnessed great interest from the sentiment analysis community. This article is a compilation of past work in automatic sarcasm detection. We observe three milestones in the research so far: semi-supervised pattern extraction to identify implicit sentiment, use of hashtag-based supervision, and incorporation of context beyond target text. In this article, we describe datasets, approaches, trends, and issues in sarcasm detection. We also discuss representative performance values, describe shared tasks, and provide pointers to future work, as given in prior works. 
In terms of resources to understand the state-of-the-art, the survey presents several useful illustrations—most prominently, a table that summarizes past papers along different dimensions such as the types of features, annotation techniques, and datasets used.", "title": "" }, { "docid": "ee141b7fd5c372fb65d355fe75ad47af", "text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.", "title": "" }, { "docid": "ad56422f7dc5c9ebf8451e17565a79e8", "text": "Morphological changes of retinal vessels such as arteriovenous (AV) nicking are signs of many systemic diseases. In this paper, an automatic method for AV-nicking detection is proposed. The proposed method includes crossover point detection and AV-nicking identification. Vessel segmentation, vessel thinning, and feature point recognition are performed to detect crossover point. A method of vessel diameter measurement is proposed with processing of removing voids, hidden vessels and micro-vessels in segmentation. The AV-nicking is detected based on the features of vessel diameter measurement. The proposed algorithms have been tested using clinical images. The results show that nicking points in retinal images can be detected successfully in most cases.", "title": "" }, { "docid": "ac657141ed547f870ad35d8c8b2ba8f5", "text": "Induced by “big data,” “topic modeling” has become an attractive alternative to mapping cowords in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument (“The Leiden Manifesto”) and then upscale to a sample of moderate size (n = 687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.", "title": "" }, { "docid": "a0547eae9a2186d4c6f1b8307317f061", "text": "Leadership scholars have called for additional research on leadership skill requirements and how those requirements vary by organizational level. In this study, leadership skill requirements are conceptualized as being layered (strata) and segmented (plex), and are thus described using a strataplex. 
Based on previous conceptualizations, this study proposes a model made up of four categories of leadership skill requirements: Cognitive skills, Interpersonal skills, Business skills, and Strategic skills. The model is then tested in a sample of approximately 1000 junior, midlevel, and senior managers, comprising a full career track in the organization. Findings support the “plex” element of the model through the emergence of four leadership skill requirement categories. Findings also support the “strata” portion of the model in that different categories of leadership skill requirements emerge at different organizational levels, and that jobs at higher levels of the organization require higher levels of all leadership skills. In addition, although certain Cognitive skill requirements are important across organizational levels, certain Strategic skill requirements only fully emerge at the highest levels in the organization. Thus a strataplex proved to be a valuable tool for conceptualizing leadership skill requirements across organizational levels. © 2007 Elsevier Inc. All rights reserved.", "title": "" } ]
scidocsrr
ef3c20dc9ab787e25e77ba60675f2ca6
A Memetic Fingerprint Matching Algorithm
[ { "docid": "0e2d6ebfade09beb448e9c538dadd015", "text": "Matching incomplete or partial fingerprints continues to be an important challenge today, despite the advances made in fingerprint identification techniques. While the introduction of compact silicon chip-based sensors that capture only part of the fingerprint has made this problem important from a commercial perspective, there is also considerable interest in processing partial and latent fingerprints obtained at crime scenes. When the partial print does not include structures such as core and delta, common matching methods based on alignment of singular structures fail. We present an approach that uses localized secondary features derived from relative minutiae information. A flow network-based matching technique is introduced to obtain one-to-one correspondence of secondary features. Our method balances the tradeoffs between maximizing the number of matches and minimizing total feature distance between query and reference fingerprints. A two-hidden-layer fully connected neural network is trained to generate the final similarity score based on minutiae matched in the overlapping areas. Since the minutia-based fingerprint representation is an ANSI-NIST standard [American National Standards Institute, New York, 1993], our approach has the advantage of being directly applicable to existing databases. We present results of testing on FVC2002’s DB1 and DB2 databases. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "b21c6ab3b97fd23f8fe1f8645608b29f", "text": "Daily activity recognition can help people to maintain a healthy lifestyle and robot to better interact with users. Robots could therefore use the information coming from the activities performed by users to give them some custom hints to improve lifestyle and daily routine. The pervasiveness of smart things together with advances in cloud robotics can help the robot to perceive and collect more information about the users and the environment. In particular thanks to the miniaturization and low cost of Inertial Measurement Units, in the last years, body-worn activity recognition has gained popularity. In this work, we investigated the performances with an unsupervised approach to recognize eight different gestures performed in daily living wearing a system composed of two inertial sensors placed on the hand and on the wrist. In this context our aim is to evaluate whether the system is able to recognize the gestures in more realistic applications, where is not possible to have a training set. The classification problem was analyzed using two unsupervised approaches (K-Mean and Gaussian Mixture Model), with an intra-subject and an inter-subject analysis, and two supervised approaches (Support Vector Machine and Random Forest), with a 10-fold cross validation analysis and with a Leave-One-Subject-Out analysis to compare the results. The outcomes show that even in an unsupervised context the system is able to recognize the gestures with an averaged accuracy of 0.917 in the K-Mean inter-subject approach and 0.796 in the Gaussian Mixture Model inter-subject one.", "title": "" }, { "docid": "7021db9b0e77b2df2576f0cc5eda8d7d", "text": "Provides an abstract of the tutorial presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.", "title": "" }, { "docid": "2d30ed139066b025dcb834737d874c99", "text": "Considerable advances have occurred in recent years in the scientific knowledge of the benefits of breastfeeding, the mechanisms underlying these benefits, and in the clinical management of breastfeeding. This policy statement on breastfeeding replaces the 1997 policy statement of the American Academy of Pediatrics and reflects this newer knowledge and the supporting publications. The benefits of breastfeeding for the infant, the mother, and the community are summarized, and recommendations to guide the pediatrician and other health care professionals in assisting mothers in the initiation and maintenance of breastfeeding for healthy term infants and high-risk infants are presented. The policy statement delineates various ways in which pediatricians can promote, protect, and support breastfeeding not only in their individual practices but also in the hospital, medical school, community, and nation.", "title": "" }, { "docid": "92fdbab17be68e94b2033ef79b41cf0c", "text": "Areas of convergence and divergence between the Narcissistic Personality Inventory (NPI; Raskin & Terry, 1988) and the Pathological Narcissism Inventory (PNI; Pincus et al., 2009) were evaluated in a sample of 586 college students. Summary scores for the NPI and PNI were not strongly correlated (r = .22) but correlations between certain subscales of these two inventories were larger (e.g., r = .71 for scales measuring Exploitativeness). 
Both measures had a similar level of correlation with the Narcissistic Personality Disorder scale from the Personality Diagnostic Questionnaire-4 (Hyler, 1994) (r = .40 and .35, respectively). The NPI and PNI diverged, however, with respect to their associations with Explicit Self-Esteem. Selfesteem was negatively associated with the PNI but positively associated with the NPI (r = .34 versus r = .26). Collectively, the results highlight the need for precision when discussing the personality characteristics associated with narcissism. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ce29ddfd7b3d3a28ddcecb7a5bb3ac8e", "text": "Steganography consist of concealing secret information in a cover object to be sent over a public communication channel. It allows two parties to share hidden information in a way that no intruder can detect the presence of hidden information. This paper presents a novel steganography approach based on pixel location matching of the same cover image. Here the information is not directly embedded within the cover image but a sequence of 4 bits of secret data is compared to the 4 most significant bits (4MSB) of the cover image pixels. The locations of the matching pixels are taken to substitute the 2 least significant bits (2LSB) of the cover image pixels. Since the data are not directly hidden in cover image, the proposed approach is more secure and difficult to break. Intruders cannot intercept it by using common LSB techniques.", "title": "" }, { "docid": "818c075d79a51fcab4c38031f14a98ef", "text": "This paper presents a statistical approach to collaborative ltering and investigates the use of latent class models for predicting individual choices and preferences based on observed preference behavior. Two models are discussed and compared: the aspect model, a probabilistic latent space model which models individual preferences as a convex combination of preference factors, and the two-sided clustering model, which simultaneously partitions persons and objects into clusters. We present EM algorithms for di erent variants of the aspect model and derive an approximate EM algorithmbased on a variational principle for the two-sided clustering model. The bene ts of the di erent models are experimentally investigated on a large movie data set.", "title": "" }, { "docid": "83e50a2c76217f60057d8bf680a12b92", "text": "[1] Luo, Z. X., Zhou, X. C., David XianFeng, G. U. (2014). From a projective invariant to some new properties of algebraic hypersurfaces.Science China Mathematics, 57(11), 2273-2284. [2] Fan, B., Wu, F., Hu, Z. (2010). Line matching leveraged by point correspondences. IEEE Conference on Computer Vision & Pattern Recognition (Vol.238, pp.390-397). [3] Fan, B., Wu, F., & Hu, Z. (2012). Robust line matching through line–point invariants. Pattern Recognition, 45(2), 794-805. [4] López, J., Santos, R., Fdez-Vidal, X. R., & Pardo, X. M. (2015). Two-view line matching algorithm based on context and appearance in low-textured images. Pattern Recognition, 48(7), 2164-2184. Dalian University of Technology Qi Jia, Xinkai Gao, Xin Fan*, Zhongxuan Luo, Haojie Li,and Ziyao Chen Novel Coplanar Line-points Invariants for Robust Line Matching Across Views", "title": "" }, { "docid": "61dcc07734c98bf0ad01a98fe0c55bf4", "text": "The system includes terminal fingerprint acquisitio n module and attendance module. 
It can realize automatically such functions as information acquisition of fingerprint, processing, and wireless transmission, fingerprint matching and making an attendance report. After taking the attendance, this system sends the attendance of every student to their parent's mobile through GSM and also stores the attendance of the respective student to calculate the percentage of attendance and alerts the class in charge. The attendance system facilitates access to the attendance of a particular student in a particular class. This system eliminates the need for stationery materials and personnel for the keeping of records and efforts of the class in charge.", "title": "" }, { "docid": "5a91b2d8611b14e33c01390181eb1891", "text": "The rapidly expanding volume of publications in the biomedical domain makes it increasingly difficult for a timely evaluation of the latest literature. That, along with a push for automated evaluation of clinical reports, presents opportunities for effective natural language processing methods. In this study we target the problem of named entity recognition, where texts are processed to annotate terms that are relevant for biomedical studies. Terms of interest in the domain include gene and protein names, and cell lines and types. Here we report on a pipeline built on Embeddings from Language Models (ELMo) and a deep learning package for natural language processing (AllenNLP). We trained context-aware token embeddings on a dataset of biomedical papers using ELMo, and incorporated these embeddings in the LSTM-CRF model used by AllenNLP for named entity recognition. We show these representations improve named entity recognition for different types of biomedical named entities. We also achieve a new state of the art in gene mention detection on the BioCreative II gene mention shared task.", "title": "" }, { "docid": "e93517eb28df17dddfc63eb7141368f9", "text": "Domain transfer learning generalizes a learning model across training data and testing data with different distributions. A general principle to tackle this problem is reducing the distribution difference between training data and testing data such that the generalization error can be bounded. Current methods typically model the sample distributions in input feature space, which depends on nonlinear feature mapping to embody the distribution discrepancy. However, this nonlinear feature space may not be optimal for the kernel-based learning machines. To this end, we propose a transfer kernel learning (TKL) approach to learn a domain-invariant kernel by directly matching source and target distributions in the reproducing kernel Hilbert space (RKHS). Specifically, we design a family of spectral kernels by extrapolating target eigensystem on source samples with Mercer's theorem. The spectral kernel minimizing the approximation error to the ground truth kernel is selected to construct domain-invariant kernel machines. Comprehensive experimental evidence on a large number of text categorization, image classification, and video event recognition datasets verifies the effectiveness and efficiency of the proposed TKL approach over several state-of-the-art methods.", "title": "" }, { "docid": "77cea98467305b9b3b11de8d3cec6ec2", "text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases.
Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.", "title": "" }, { "docid": "8405b35a36235ba26444655a3619812d", "text": "Studying the reason why single-layer molybdenum disulfide (MoS2) appears to fall short of its promising potential in flexible nanoelectronics, we find that the nature of contacts plays a more important role than the semiconductor itself. In order to understand the nature of MoS2/metal contacts, we perform ab initio density functional theory calculations for the geometry, bonding, and electronic structure of the contact region. We find that the most common contact metal (Au) is rather inefficient for electron injection into single-layer MoS2 and propose Ti as a representative example of suitable alternative electrode materials.", "title": "" }, { "docid": "64e2b73e8a2d12a1f0bbd7d07fccba72", "text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an all-around evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.", "title": "" }, { "docid": "a5cb288b5a2f29c22a9338be416a27f7", "text": "ENCOURAGING CHILDREN'S INTRINSIC MOTIVATION CAN HELP THEM TO ACHIEVE ACADEMIC SUCCESS (ADELMAN, 1978; ADELMAN & TAYLOR, 1986; GOTTFRIED, 1983, 1985). TO HELP STUDENTS WITH AND WITHOUT LEARNING DISABILITIES TO DEVELOP ACADEMIC INTRINSIC MOTIVATION, IT IS IMPORTANT TO DEFINE THE FACTORS THAT AFFECT MOTIVATION (ADELMAN & CHANEY, 1982; ADELMAN & TAYLOR, 1983). THIS ARTICLE OFFERS EDUCATORS AN INSIGHT INTO THE EFFECTS OF DIFFERENT MOTIVATIONAL ORIENTATIONS ON THE SCHOOL LEARNING OF STUDENTS WITH LEARNING DISABILITIES, AS WELL AS INTO THE VARIABLES AFFECTING INTRINSIC AND EXTRINSIC MOTIVATION. ALSO INCLUDED ARE RECOMMENDATIONS, BASED ON EMPIRICAL EVIDENCE, FOR ENHANCING ACADEMIC INTRINSIC MOTIVATION IN LEARNERS OF VARYING ABILITIES AT ALL GRADE LEVELS. INTEREST IN THE VARIOUS ASPECTS OF INTRINSIC and extrinsic motivation has accelerated in recent years. Motivational orientation is considered to be an important factor in determining the academic success of children with and without disabilities (Adelman & Taylor, 1986; Calder & Staw, 1975; Deci, 1975; Deci & Chandler, 1986; Schunk, 1991).
Academic intrinsic motivation has been found to be significantly correlated with academic achievement in students with learning disabilities (Gottfried, 1985) and without learning disabilities (Adelman, 1978; Adelman & Taylor, 1983). However, children with learning disabilities (LD) are less likely than their nondisabled peers to be intrinsically motivated (Adelman & Chaney, 1982; Adelman & Taylor, 1986; Mastropieri & Scruggs, 1994; Smith, 1994). Students with LD have been found to have more positive attitudes toward school than toward school learning (Wilson & David, 1994). Wilson and David asked 89 students with LD to respond to items on the School Attitude Measures (SAM; Wick, 1990) and on the Children's Academic Intrinsic Motivation Inventory (CAIMI; Gottfried, 1986). The students with LD were found to have a more positive attitude toward the school environment than toward academic tasks. Research has also shown that students with LD may derive their self-perceptions from areas other than school, and do not see themselves as less competent in areas of school learning (Grolnick & Ryan, 1990). Although there is only a limited amount of research available on intrinsic motivation in the population with special needs (Adelman, 1978; Adelman & Taylor, 1986; Grolnick & Ryan, 1990), there is an abundance of research on the general school-age population. This article is an attempt to use existing research to identify variables pertinent to the academic intrinsic motivation of children with learning disabilities. The first part of the article deals with the definitions of intrinsic and extrinsic motivation. The next part identifies some of the factors affecting the motivational orientation and subsequent academic achievement of school-age children. This is followed by empirical evidence of the effects of rewards on intrinsic motivation, and suggestions on enhancing intrinsic motivation in the learner. At the end, several strategies are presented that could be used by the teacher to develop and encourage intrinsic motivation in children with and without LD. DEFINING MOTIVATIONAL ATTRIBUTES Intrinsic Motivation Intrinsic motivation has been defined as (a) participation in an activity purely out of curiosity, that is, from a need to know more about something (Deci, 1975; Gottfried, 1983; Woolfolk, 1990); (b) the desire to engage in an activity purely for the sake of participating in and completing a task (Bates, 1979; Deci, Vallerand, Pelletier, & Ryan, 1991); and (c) the desire to contribute (Mills, 1991). Academic intrinsic motivation has been measured by (a) the ability of the learner to persist with the task assigned (Brophy, 1983; Gottfried, 1983); (b) the amount of time spent by the student on tackling the task (Brophy, 1983; Gottfried, 1983); (c) the innate curiosity to learn (Gottfried, 1983); (d) the feeling of efficacy related to an activity (Gottfried, 1983; Schunk, 1991; Smith, 1994); (e) the desire to select an activity (Brophy, 1983); and (f) a combination of all these variables (Deci, 1975; Deci & Ryan, 1985). A student who is intrinsically motivated will persist with the assigned task, even though it may be difficult (Gottfried, 1983; Schunk, 1990), and will not need any type of reward or incentive to initiate or complete a task (Beck, 1978; Deci, 1975; Woolfolk, 1990).
This type of student is more likely to complete the chosen task and be excited by the challenging nature of an activity. The intrinsically motivated student is also more likely to retain the concepts learned and to feel confident about tackling unfamiliar learning situations, like new vocabulary words. However, the amount of interest generated by the task also plays a role in the motivational orientation of the learner. An assigned task with zero interest value is less likely to motivate the student than is a task that arouses interest and curiosity. Intrinsic motivation is based in the innate, organismic needs for competence and self-determination (Deci & Ryan, 1985; Woolfolk, 1990), as well as the desire to seek and conquer challenges (Adelman & Taylor, 1990). People are likely to be motivated to complete a task on the basis of their level of interest and the nature of the challenge. Research has suggested that children with higher academic intrinsic motivation function more effectively in school (Adelman & Taylor, 1990; Boggiano & Barrett, 1992; Gottfried, 1990; Soto, 1988). Besides innate factors, there are several other variables that can affect intrinsic motivation. Extrinsic Motivation Adults often give the learner an incentive to participate in or to complete an activity. The incentive might be in the form of a tangible reward, such as money or candy. Or, it might be the likelihood of a reward in the future, such as a good grade. Or, it might be a nontangible reward, for example, verbal praise or a pat on the back. The incentive might also be exemption from a less liked activity or avoidance of punishment. These incentives are extrinsic motivators. A person is said to be extrinsically motivated when she or he undertakes a task purely for the sake of attaining a reward or for avoiding some punishment (Adelman & Taylor, 1990; Ball, 1984; Beck, 1978; Deci, 1975; Wiersma, 1992; Woolfolk, 1990). Extrinsic motivation can, especially in learning and other forms of creative work, interfere with intrinsic motivation (Benninga et al., 1991; Butler, 1989; Deci, 1975; McCullers, Fabes, & Moran, 1987). In such cases, it might be better not to offer rewards for participating in or for completing an activity, be it textbook learning or an organized play activity. Not only teachers but also parents have been found to negatively influence the motivational orientation of the child by providing extrinsic consequences contingent upon their school performance (Gottfried, Fleming, & Gottfried, 1994). The relationship between rewards (and other extrinsic factors) and the intrinsic motivation of the learner is outlined in the following sections. MOTIVATION AND THE LEARNER In a classroom, the student is expected to tackle certain types of tasks, usually with very limited choices. Most of the research done on motivation has been done in settings where the learner had a wide choice of activities, or in a free-play setting. In reality, the student has to complete tasks that are compulsory as well as evaluated (Brophy, 1983). Children are expected to complete a certain number of assignments that meet specified criteria. For example, a child may be asked to complete five multiplication problems and is expected to get correct answers to at least three. Teachers need to consider how instructional practices are designed from the motivational perspective (Schunk, 1990). Development of skills required for academic achievement can be influenced by instructional design. 
If the design undermines student ability and skill level, it can reduce motivation (Brophy, 1983; Schunk, 1990). This is especially applicable to students with disabilities. Students with LD have shown a significant increase in academic learning after engaging in interesting tasks like computer games designed to enhance learning (Adelman, Lauber, Nelson, & Smith, 1989). A common aim of educators is to help all students enhance their learning, regardless of the student's ability level. To achieve this outcome, the teacher has to develop a curriculum geared to the individual needs and ability levels of the students, especially the students with special needs. If the assigned task is within the child's ability level as well as inherently interesting, the child is very likely to be intrinsically motivated to tackle the task. The task should also be challenging enough to stimulate the child's desire to attain mastery. The probability of success or failure is often attributed to factors such as ability, effort, difficulty level of the task, and luck (Schunk, 1990). One or more of these attributes might, in turn, affect the motivational orientation of a student. The student who is sure of some level of success is more likely to be motivated to tackle the task than one who is unsure of the outcome (Adelman & Taylor, 1990). A student who is motivated to learn will find school-related tasks meaningful (Brophy, 1983, 1987). Teachers can help students to maximize their achievement by adjusting the instructional design to their individual characteristics and motivational orientation. The personality traits and motivational tendency of learners with mild handicaps can either help them to compensate for their inadequate learning abilities and enhance performanc", "title": "" }, { "docid": "262c11ab9f78e5b3f43a31ad22cf23c5", "text": "Responding to threats in the environment is crucial for survival. Certain types of threat produce defensive responses without necessitating previous experience and are considered innate, whereas other threats are learned by experiencing aversive consequences. Two important innate threats are whether an encountered stimulus is a member of the same species (social threat) and whether a stimulus suddenly appears proximal to the body (proximal threat). These threats are manifested early in human development and robustly elicit defensive responses. Learned threat, on the other hand, enables adaptation to threats in the environment throughout the life span. A well-studied form of learned threat is fear conditioning, during which a neutral stimulus acquires the ability to elicit defensive responses through pairings with an aversive stimulus. Whether innate threats can facilitate fear conditioning, and whether different types of innate threats can enhance each other, is largely unknown. We developed an immersive virtual reality paradigm to test how innate social and proximal threats are related to each other and how they influence conditioned fear. Skin conductance responses were used to index the autonomic component of the defensive response. We found that social threat modulates proximal threat, but that neither proximal nor social threat modulates conditioned fear.
Our results suggest that distinct processes regulate autonomic activity in response to proximal and social threat on the one hand, and conditioned fear on the other.", "title": "" }, { "docid": "82c9c8a7a9dccfa59b09df595de6235c", "text": "Honeypots are closely monitored decoys that are employed in a network to study the trail of hackers and to alert network administrators of a possible intrusion. Using honeypots provides a cost-effective solution to increase the security posture of an organization. Even though it is not a panacea for security breaches, it is useful as a tool for network forensics and intrusion detection. Nowadays, they are also being extensively used by the research community to study issues in network security, such as Internet worms, spam control, DoS attacks, etc. In this paper, we advocate the use of honeypots as an effective educational tool to study issues in network security. We support this claim by demonstrating a set of projects that we have carried out in a network, which we have deployed specifically for running distributed computer security projects. The design of our projects tackles the challenges in installing a honeypot in academic institution, by not intruding on the campus network while providing secure access to the Internet. In addition to a classification of honeypots, we present a framework for designing assignments/projects for network security courses. The three sample honeypot projects discussed in this paper are presented as examples of the framework.", "title": "" }, { "docid": "da4b2452893ca0734890dd83f5b63db4", "text": "Diabetic retinopathy is when damage occurs to the retina due to diabetes, which affects up to 80 percent of all patients who have had diabetes for 10 years or more. The expertise and equipment required are often lacking in areas where diabetic retinopathy detection is most needed. Most of the work in the field of diabetic retinopathy has been based on disease detection or manual extraction of features, but this paper aims at automatic diagnosis of the disease into its different stages using deep learning. This paper presents the design and implementation of GPU accelerated deep convolutional neural networks to automatically diagnose and thereby classify high-resolution retinal images into 5 stages of the disease based on severity. The single model accuracy of the convolutional neural networks presented in this paper is 0.386 on a quadratic weighted kappa metric and ensembling of three such similar models resulted in a score of 0.3996.", "title": "" }, { "docid": "6fdb3ae03e6443765c72197eb032f4a0", "text": "This dissertation describes a number of algorithms developed to increase the robustness of automatic speech recognition systems with respect to changes in the environment. These algorithms attempt to improve the recognition accuracy of speech recognition systems when they are trained and tested in different acoustical environments, and when a desk-top microphone (rather than a close-talking microphone) is used for speech input. Without such processing, mismatches between training and testing conditions produce an unacceptable degradation in recognition accuracy. Two kinds of environmental variability are introduced by the use of desk-top microphones and different training and testing conditions: additive noise and spectral tilt introduced by linear filtering. 
An important attribute of the novel compensation algorithms described in this thesis is that they provide joint rather than independent compensation for these two types of degradation. Acoustical compensation is applied in our algorithms as an additive correction in the cepstral domain. This allows a higher degree of integration within SPHINX, the Carnegie Mellon speech recognition system, that uses the cepstrum as its feature vector. Therefore, these algorithms can be implemented very efficiently. Processing in many of these algorithms is based on instantaneous signal-to-noise ratio (SNR), as the appropriate compensation represents a form of noise suppression at low SNRs and spectral equalization at high SNRs. The compensation vectors for additive noise and spectral transformations are estimated by minimizing the differences between speech feature vectors obtained from a \"standard\" training corpus of speech and feature vectors that represent the current acoustical environment. In our work this is accomplished by a minimizing the distortion of vector-quantized cepstra that are produced by the feature extraction module in SPHINX. In this dissertation we describe several algorithms including the SNR-Dependent Cepstral Normalization, (SDCN) and the Codeword-Dependent Cepstral Normalization (CDCN). With CDCN, the accuracy of SPHINX when trained on speech recorded with a close-talking microphone and tested on speech recorded with a desk-top microphone is essentially the same obtained when the system is trained and tested on speech from the desk-top microphone. An algorithm for frequency normalization has also been proposed in which the parameter of the bilinear transformation that is used by the signal-processing stage to produce frequency warping is adjusted for each new speaker and acoustical environment. The optimum value of this parameter is again chosen to minimize the vector-quantization distortion between the standard environment and the current one. In preliminary studies, use of this frequency normalization produced a moderate additional decrease in the observed error rate.", "title": "" }, { "docid": "cc5d183cae6251b73e5302b81e4589db", "text": "Digital images in the real world are created by a variety of means and have diverse properties. A photographical natural scene image (NSI) may exhibit substantially different characteristics from a computer graphic image (CGI) or a screen content image (SCI). This casts major challenges to objective image quality assessment, for which existing approaches lack effective mechanisms to capture such content type variations, and thus are difficult to generalize from one type to another. To tackle this problem, we first construct a cross-content-type (CCT) database, which contains 1,320 distorted NSIs, CGIs, and SCIs, compressed using the high efficiency video coding (HEVC) intra coding method and the screen content compression (SCC) extension of HEVC. We then carry out a subjective experiment on the database in a well-controlled laboratory environment. Moreover, we propose a unified content-type adaptive (UCA) blind image quality assessment model that is applicable across content types. A key step in UCA is to incorporate the variations of human perceptual characteristics in viewing different content types through a multi-scale weighting framework. This leads to superior performance on the constructed CCT database. UCA is training-free, implying strong generalizability. 
To verify this, we test UCA on other databases containing JPEG, MPEG-2, H.264, and HEVC compressed images/videos, and observe that it consistently achieves competitive performance.", "title": "" }, { "docid": "d9bd23208ab6eb8688afea408a4c9eba", "text": "A novel ultra-wideband (UWB) bandpass filter with 5 to 6 GHz rejection band is proposed. The multiple coupled line structure is incorporated with multiple-mode resonator (MMR) to provide wide transmission band and enhance out-of band performance. To inhibit the signals ranged from 5- to 6-GHz, four stepped-impedance open stubs are implemented on the MMR without increasing the size of the proposed filter. The design of the proposed UWB filter has two transmission bands. The first passband from 2.8 GHz to 5 GHz has less than 2 dB insertion loss and greater than 18 dB return loss. The second passband within 6 GHz and 10.6 GHz has less than 1.5 dB insertion loss and greater than 15 dB return loss. The rejection at 5.5 GHz is better than 50 dB. This filter can be integrated in UWB radio systems and efficiently enhance the interference immunity from WLAN.", "title": "" } ]
scidocsrr
5bc573a250fceaa9d862eab5bd3fc697
Monet: A User-Oriented Behavior-Based Malware Variants Detection System for Android
[ { "docid": "55a6353fa46146d89c7acd65bee237b5", "text": "The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93\\% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2\\%) and an acceptable false positive rate (5.15\\%) for a vetting purpose.", "title": "" } ]
[ { "docid": "fbc97be77f713a49e5fc6b43cd0204b8", "text": "We describe the architecture of the ILEX system, • which supports opportunistic text generation. In • web-based text generation, the SYstem cannot plan the entire multi-page discourse because the user's browsing path is unpredictable. For this reason, • the system must be ready opportunistically to take • advantage of whatever path the user chooses. We describe both the nature of opportunism in ILEX's museum domain, and then show how ILEX has been designed to function in this environment. The architecture presented addresses opportunism in both content determination and sentenceplanning. 1 E x p l o i t i n g o p p o r t u n i t i e s in t e x t g e n e r a t i o n • Many models of text generation make use of standard patterns (whether expressed as schemas (e.g. [McKeown 85]) or plan operators (e.g. [Moore and Paris 93])) to break down communicative goals in such a way as to produce extended texts. Such models are making two basic assumptions: 1. Text generation is goal directed, in the sense that spans and subspans of text are designed to achieve unitary communicative goals [Grosz and Sidner 86]. 2. Although the details Of the structUre of a text may have to be tuned to particulars of the communicative situation, generally the structure is determined by the goals and their decomposition. That is, a generator •needs strategies for decomposing the achievement of complex • goals into sequences of utterances, rather than ways of combining sequences of utterances into more complex structures. Generation is \"top-down\", rather than\"bottom-up\" [Marcu 97]. Our belief is that there is an important class of NLG problems for which these basic assumptions• are not helpful. These problems all involve situations where semi-fixed explanation strategies are less useful than the ability to exploit opportunities. WordNet gives the following definition of 0pportunity': O p p o r t u n i t y : \"A possibility due to a favorable combination of circumstances\" Because • opportunities involve •combinations of circumstances, they are often unexpected and hard to predict. It may be too expensive or impossible to have complete knowledge about them. Topdown generation strategies may not be able •to exploit opportunities (except at the cost of looking for all opportunities at all• points) because it is difficult to associate classes of opportunities with fixed stages in the explanation •process. We are investigating opportunistic text generation in the Intelligent Labelling Explorer (ILEX) project, which seeks automatically to generate a sequence of commentaries for items in an electronic 180 South Bridge, Edinburgh EH1 1HN, Email: {chrism,miCko}@dai.ecl.ac.uk. 2 Buccleuch Place, Edinburgh EH8 9LW, Email: {alik, jon}@cogsci.ed, ac.uk", "title": "" }, { "docid": "d8b8fa014fc0db066f8bb9b624f31d25", "text": "XCSF is a rule-based on-line learning system that makes use of local learning concepts in conjunction with gradient-based approximation techniques. It is mainly used to learn functions, or rather regression problems, by means of dividing the problem space into smaller subspaces and approximate the function values linearly therein. In this paper, we show how local interpolation can be incorporated to improve the approximation speed and thus to decrease the system error. 
We describe how a novel interpolation component integrates into the algorithmic structure of XCSF and thereby augments the well-established separation into the performance, discovery and reinforcement component. To underpin the validity of our approach, we present and discuss results from experiments on three test functions of different complexity, i.e. we show that by means of the proposed strategies for integrating the locally interpolated values, the overall performance of XCSF can be improved.", "title": "" }, { "docid": "a10752bb80ad47e18ef7dbcd83d49ff7", "text": "Approximate computing has gained significant attention due to the popularity of multimedia applications. In this paper, we propose a novel inaccurate 4:2 counter that can effectively reduce the partial product stages of the Wallace Multiplier. Compared to the normal Wallace multiplier, our proposed multiplier can reduce 10.74% of power consumption and 9.8% of delay on average, with an error rate from 0.2% to 13.76% The accuracy of amplitude is higher than 99% In addition, we further enhance the design with error-correction units to provide accurate results. The experimental results show that the extra power consumption of correct units is lower than 6% on average. Compared to the normal Wallace multiplier, the average latency of our proposed multiplier with EDC is 6% faster when the bit-width is 32, and the power consumption is still 10% lower than that of the Wallace multiplier.", "title": "" }, { "docid": "8518dc45e3b0accfc551111489842359", "text": "PURPOSE\nRobot-assisted surgery has been rapidly adopted in the U.S. for prostate cancer. Its adoption has been driven by market forces and patient preference, and debate continues regarding whether it offers improved outcomes to justify the higher cost relative to open surgery. We examined the comparative effectiveness of robot-assisted vs open radical prostatectomy in cancer control and survival in a nationally representative population.\n\n\nMATERIALS AND METHODS\nThis population based observational cohort study of patients with prostate cancer undergoing robot-assisted radical prostatectomy and open radical prostatectomy during 2003 to 2012 used data captured in the SEER (Surveillance, Epidemiology, and End Results)-Medicare linked database. Propensity score matching and time to event analysis were used to compare all cause mortality, prostate cancer specific mortality and use of additional treatment after surgery.\n\n\nRESULTS\nA total of 6,430 robot-assisted radical prostatectomies and 9,161 open radical prostatectomies performed during 2003 to 2012 were identified. The use of robot-assisted radical prostatectomy increased from 13.6% in 2003 to 2004 to 72.6% in 2011 to 2012. After a median followup of 6.5 years (IQR 5.2-7.9) robot-assisted radical prostatectomy was associated with an equivalent risk of all cause mortality (HR 0.85, 0.72-1.01) and similar cancer specific mortality (HR 0.85, 0.50-1.43) vs open radical prostatectomy. Robot-assisted radical prostatectomy was also associated with less use of additional treatment (HR 0.78, 0.70-0.86).\n\n\nCONCLUSIONS\nRobot-assisted radical prostatectomy has comparable intermediate cancer control as evidenced by less use of additional postoperative cancer therapies and equivalent cancer specific and overall survival. Longer term followup is needed to assess for differences in prostate cancer specific survival, which was similar during intermediate followup. 
Our findings have significant quality and cost implications, and provide reassurance regarding the adoption of more expensive technology in the absence of randomized controlled trials.", "title": "" }, { "docid": "41b92e3e2941175cf6d80bf809d7bd32", "text": "Automated citation analysis (ACA) can be important for many applications including author ranking and literature based information retrieval, extraction, summarization and question answering. In this study, we developed a new compositional attention network (CAN) model to integrate local and global attention representations with a hierarchical attention mechanism. Training on a new benchmark corpus we built, our evaluation shows that the CAN model performs consistently well on both citation classification and sentiment analysis tasks.", "title": "" }, { "docid": "453191a57a9282248b0d5b8a85fa4ce0", "text": "The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8.", "title": "" }, { "docid": "0d0f6e946bd9125f87a78d8cf137ba97", "text": "Acute renal failure increases risk of death after cardiac surgery. However, it is not known whether more subtle changes in renal function might have an impact on outcome. Thus, the association between small serum creatinine changes after surgery and mortality, independent of other established perioperative risk indicators, was analyzed. In a prospective cohort study in 4118 patients who underwent cardiac and thoracic aortic surgery, the effect of changes in serum creatinine within 48 h postoperatively on 30-d mortality was analyzed. Cox regression was used to correct for various established demographic preoperative risk indicators, intraoperative parameters, and postoperative complications. In the 2441 patients in whom serum creatinine decreased, early mortality was 2.6% in contrast to 8.9% in patients with increased postoperative serum creatinine values. Patients with large decreases (DeltaCrea <-0.3 mg/dl) showed a progressively increasing 30-d mortality (16 of 199 [8%]). 
Mortality was lowest (47 of 2195 [2.1%]) in patients in whom serum creatinine decreased to a maximum of -0.3 mg/dl; mortality increased to 6% in patients in whom serum creatinine remained unchanged or increased up to 0.5 mg/dl. Mortality (65 of 200 [32.5%]) was highest in patients in whom creatinine increased > or =0.5 mg/dl. For all groups, increases in mortality remained significant in multivariate analyses, including postoperative renal replacement therapy. After cardiac and thoracic aortic surgery, 30-d mortality was lowest in patients with a slight postoperative decrease in serum creatinine. Any even minimal increase or profound decrease of serum creatinine was associated with a substantial decrease in survival.", "title": "" }, { "docid": "6bdeee1b2dd8a9502558c12dcd270ff6", "text": "In this work, we describe our experiences in developing cloud forensics tools and use them to support three main points: First, we make the argument that cloud forensics is a qualitatively different problem. In the context of SaaS, it is incompatible with long-established acquisition and analysis techniques, and requires a new approach and forensic toolset. We show that client-side techniques, which are an extension of methods used over the last three decades, have inherent limitations that can only be overcome by working directly with the interfaces provided by cloud service providers. Second, we present our results in building forensic tools in the form of three case studies: kumoddea tool for cloud drive acquisition, kumodocsea tool for Google Docs acquisition and analysis, and kumofsea tool for remote preview and screening of cloud drive data. We show that these tools, which work with the public and private APIs of the respective services, provide new capabilities that cannot be achieved by examining client-side", "title": "" }, { "docid": "878617f145544f66e79f7d2d3404cbdf", "text": "In this paper we address the problem of classifying cited work into important and non-important to the developments presented in a research publication. This task is vital for the algorithmic techniques that detect and follow emerging research topics and to qualitatively measure the impact of publications in increasingly growing scholarly big data. We consider cited work as important to a publication if that work is used or extended in some way. If a reference is cited as background work or for the purpose of comparing results, the cited work is considered to be non-important. By employing five classification techniques (Support Vector Machine, Naïve Bayes, Decision Tree, K-Nearest Neighbors and Random Forest) on an annotated dataset of 465 citations, we explore the effectiveness of eight previously published features and six novel features (including context based, cue words based and textual based). Within this set, our new features are among the best performing. Using the Random Forest classifier we achieve an overall classification accuracy of 0.91 AUC.", "title": "" }, { "docid": "368c769f4427c213c68d1b1d7a0e4ca9", "text": "The goal of this paper is to perform 3D object detection in the context of autonomous driving. Our method aims at generating a set of high-quality 3D object proposals by exploiting stereo imagery. We formulate the problem as minimizing an energy function that encodes object size priors, placement of objects on the ground plane as well as several depth informed features that reason about free space, point cloud densities and distance to the ground. 
We then exploit a CNN on top of these proposals to perform object detection. In particular, we employ a convolutional neural net (CNN) that exploits context and depth information to jointly regress to 3D bounding box coordinates and object pose. Our experiments show significant performance gains over existing RGB and RGB-D object proposal methods on the challenging KITTI benchmark. When combined with the CNN, our approach outperforms all existing results in object detection and orientation estimation tasks for all three KITTI object classes. Furthermore, we experiment also with the setting where LIDAR information is available, and show that using both LIDAR and stereo leads to the best result.", "title": "" }, { "docid": "2e35483beb568ab514601ba21d70c2d3", "text": "Determining the intended sense of words in text – word sense disambiguation (WSD) – is a long-standing problem in natural language processing. In this paper, we present WSD algorithms which use neural network language models to achieve state-of-the-art precision. Each of these methods learns to disambiguate word senses using only a set of word senses, a few example sentences for each sense taken from a licensed lexicon, and a large unlabeled text corpus. We classify based on cosine similarity of vectors derived from the contexts in unlabeled query and labeled example sentences. We demonstrate state-of-the-art results when using the WordNet sense inventory, and significantly better than baseline performance using the New Oxford American Dictionary inventory. The best performance was achieved by combining an LSTM language model with graph label propagation.", "title": "" }, { "docid": "566b4dbea724fc852264b70ce6cae0df", "text": "On the basis of self-regulation theories, the authors develop an affective shift model of work engagement according to which work engagement emerges from the dynamic interplay of positive and negative affect. The affective shift model posits that negative affect is positively related to work engagement if negative affect is followed by positive affect. The authors applied experience sampling methodology to test the model. Data on affective events, mood, and work engagement was collected twice a day over 9 working days among 55 software developers. In support of the affective shift model, negative mood and negative events experienced in the morning of a working day were positively related to work engagement in the afternoon if positive mood in the time interval between morning and afternoon was high. Individual differences in positive affectivity moderated within-person relationships. The authors discuss how work engagement can be fostered through affect regulation.", "title": "" }, { "docid": "b9bc1b10d144e6680de682273dbced00", "text": "We propose a new and, arguably, a very simple reduction of instance segmentation to semantic segmentation. This reduction allows to train feed-forward non-recurrent deep instance segmentation systems in an end-to-end fashion using architectures that have been proposed for semantic segmentation. Our approach proceeds by introducing a fixed number of labels (colors) and then dynamically assigning object instances to those labels during training (coloring). A standard semantic segmentation objective is then used to train a network that can color previously unseen images. At test time, individual object instances can be recovered from the output of the trained convolutional network using simple connected component analysis. 
In the experimental validation, the coloring approach is shown to be capable of solving diverse instance segmentation tasks arising in autonomous driving (the Cityscapes benchmark), plant phenotyping (the CVPPP leaf segmentation challenge), and high-throughput microscopy image analysis. The source code is publicly available: https://github.com/kulikovv/DeepColoring.", "title": "" }, { "docid": "825bbc624a8e7a8a405b4c453b9f681d", "text": "For enterprise systems running on public clouds in which the servers are outside the control domain of the enterprise, access control that was traditionally executed by reference monitors deployed on the system servers can no longer be trusted. Hence, a self-contained security scheme is regarded as an effective way for protecting outsourced data. However, building such a scheme that can implement the access control policy of the enterprise has become an important challenge. In this paper, we propose a self-contained data protection mechanism called RBAC-CPABE by integrating role-based access control (RBAC), which is widely employed in enterprise systems, with the ciphertext-policy attribute-based encryption (CP-ABE). First, we present a data-centric RBAC (DC-RBAC) model that supports the specification of fine-grained access policy for each data object to enhance RBAC’s access control capabilities. Then, we fuse DC-RBAC and CP-ABE by expressing DC-RBAC policies with the CP-ABE access tree and encrypt data using CP-ABE. Because CP-ABE enforces both access control and decryption, access authorization can be achieved by the data itself. A security analysis and experimental results indicate that RBAC-CPABE maintains the security and efficiency properties of the CP-ABE scheme on which it is based, but substantially improves the access control capability. Finally, we present an implemented framework for RBAC-CPABE to protect privacy and enforce access control for data stored in the cloud.", "title": "" }, { "docid": "8ead9a0e083a65ef5cb5b3f7e9aea5be", "text": "In this paper, a new resonant gate-drive circuit is proposed to recover a portion of the power-MOSFET-gate energy that is typically dissipated in high-frequency converters. The proposed circuit consists of four control switches and a small resonant inductance. The current through the resonant inductance is discontinuous in order to minimize circulating-current conduction loss that is present in other methods. The proposed circuit also achieves quick turn-on and turn-off transition times to reduce switching and conduction losses in power MOSFETs. An analysis, a design procedure, and experimental results are presented for the proposed circuit. Experimental results demonstrate that the proposed driver can recover 51% of the gate energy at 5-V gate-drive voltage.", "title": "" }, { "docid": "d5eb643385b573706c48cbb2cb3262df", "text": "This article identifies problems and conditions that contribute to nipple pain during lactation and that may lead to early cessation or noninitiation of breastfeeding. Signs and symptoms of poor latch-on and positioning, oral anomalies, and suckling disorders are reviewed. Diagnosis and treatment of infectious agents that may cause nipple pain are presented. Comfort measures for sore nipples and current treatment recommendations for nipple wound healing are discussed. 
Suggestions are made for incorporating in-depth breastfeeding content into midwifery education programs.", "title": "" }, { "docid": "7256d6c5bebac110734275d2f985ab31", "text": "The location-based social networks (LBSN) enable users to check in their current location and share it with other users. The accumulated check-in data can be employed for the benefit of users by providing personalized recommendations. In this paper, we propose a context-aware location recommendation system for LBSNs using a random walk approach. Our proposed approach considers the current context (i.e., current social relations, personal preferences and current location) of the user to provide personalized recommendations. We build a graph model of LBSNs for performing a random walk approach with restart. Random walk is performed to calculate the recommendation probabilities of the nodes. A list of locations are recommended to users after ordering the nodes according to the estimated probabilities. We compare our algorithm, CLoRW, with popularity-based, friend-based and expert-based baselines, user-based collaborative filtering approach and a similar work in the literature. According to experimental results, our algorithm outperforms these approaches in all of the test cases.", "title": "" }, { "docid": "80f31015c604b95e6682908717e90d44", "text": "ed from specific role-abstraction levels would enable the role-assignment algorithm to incorporate relevant state attributes as rules in the assignment of roles to nodes. It would also allow roles to control or tune to the desired behavior in response to undesirable local node/network events. This is known as role load balancing and it is pursued as role reassignment to repair role failures. We will discuss role failures and role load balancing later in this section. 4.4.1 URAF architecture overview Figure 4.11 shows the high level design architecture of the unified role-abstraction framework (URAF) in conjunction with a middleware (RBMW) that maps application specified services and expected QoS onto an ad hoc wireless sensor network with heterogeneous node capabilities. The design of the framework is modular such that each module provides higher levels of network abstractions to the modules directly interfaced with it. For example, at the lowest level, we have API’s that interface directly with the physical hardware. The resource usage and accounting module maintains up-to-date information on node and neighbor resource specifications and their availability. As discussed earlier, complex roles are composed of elementary roles and these are executed as tasks on the node. The state of the role execution at any point in time is cached by the task status table for that complex role. At the next higher abstraction, we calculate and maintain the overall role execution time and the energy dissipated by the node in that time. The available energy is thus calculated and cross checked against remaining battery capacity. There is another table that measures and maintains the failure/success of a role for every service schedule or period. This is used to calculate the load imposed by the service at different time intervals.", "title": "" }, { "docid": "23f91ffdd3c15fdeeb3ef33ca463c238", "text": "The Shield project relied on application protocol analyzers to detect potential exploits of application vulnerabilities. We present the design of a second-generation generic application-level protocol analyzer (GAPA) that encompasses a domain-specific language and the associated run-time. 
We designed GAPA to satisfy three important goals: safety, real-time analysis and response, and rapid development of analyzers. We have found that these goals are relevant for many network monitors that implement protocol analysis. Therefore, we built GAPA to be readily integrated into tools such as Ethereal as well as Shield. GAPA preserves safety through the use of a memorysafe language for both message parsing and analysis, and through various techniques to reduce the amount of state maintained in order to avoid denial-of-service attacks. To support online analysis, the GAPA runtime uses a streamprocessing model with incremental parsing. In order to speed protocol development, GAPA uses a syntax similar to many protocol RFCs and other specifications, and incorporates many common protocol analysis tasks as built-in abstractions. We have specified 10 commonly used protocols in the GAPA language and found it expressive and easy to use. We measured our GAPA prototype and found that it can handle an enterprise client HTTP workload at up to 60 Mbps, sufficient performance for many end-host firewall/IDS scenarios. At the same time, the trusted code base of GAPA is an order of magnitude smaller than Ethereal.", "title": "" }, { "docid": "cd18d1e77af0e2146b67b028f1860ff0", "text": "Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.", "title": "" } ]
scidocsrr
d33e93a153dd2432237d19155e8f85b0
Effective Gaussian mixture learning for video background subtraction
[ { "docid": "6851e4355ab4825b0eb27ac76be2329f", "text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early in step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described, real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making colorbased segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.", "title": "" } ]
[ { "docid": "6055957e5f48c5f82afcfa641176b759", "text": "This article presents the design of a low cost fully active phased array antenna with specific emphasis on the realization of an elementary radiating cell. The phased array antenna is designed for mobile satellite services and dedicated for automotive applications. Details on the radiating element design as well as its implementation in a multilayer's build-up are presented and discussed. Results of the measurements and characterization of the elementary radiating cell are also presented and discussed. An outlook of the next steps in the antenna realization concludes this paper.", "title": "" }, { "docid": "cf97c276a503968d849f45f4d1614bfd", "text": "Social network platforms can archive data produced by their users. Then, the archived data is used to provide better services to the users. One of the services that these platforms provide is the recommendation service. Recommendation systems can predict the future preferences of users using various different techniques. One of the most popular technique for recommendation is matrix-factorization, which uses lowrank approximation of input data. Similarly, word embedding methods from natural language processing literature learn lowdimensional vector space representation of input elements. Noticing the similarities among word embedding and matrix factorization techniques and based on the previous works that apply techniques from text processing to recommendation, Word2Vec’s skip-gram technique is employed to make recommendations. The aim of this work is to make recommendation on next check-in venues. Unlike previous works that use Word2Vec for recommendation, in this work non-textual features are used. For the experiments, a Foursquare check-in dataset is used. The results show that use of vector space representations of items modeled by skip-gram technique is promising for making recommendations. Keywords—Recommendation systems, Location based social networks, Word embedding, Word2Vec, Skip-gram technique", "title": "" }, { "docid": "5b110a3e51de3489168e7edca81b5f3e", "text": "This paper is a review of research in product development, which we define as the transformation of a market opportunity into a product available for sale. Our review is broad, encompassing work in the academic fields of marketing, operations management, and engineering design. The value of this breadth is in conveying the shape of the entire research landscape. We focus on product development projects within a single firm. We also devote our attention to the development of physical goods, although much of the work we describe applies to products of all kinds. We look inside the “black box” of product development at the fundamental decisions that are made by intention or default. In doing so, we adopt the perspective of product development as a deliberate business process involving hundreds of decisions, many of which can be usefully supported by knowledge and tools. We contrast this approach to prior reviews of the literature, which tend to examine the importance of environmental and contextual variables, such as market growth rate, the competitive environment, or the level of top-management support. (Product Development Decisions; Survey; Literature Review)", "title": "" }, { "docid": "adac9cbc59aea46821aaebad3bcc1772", "text": "Multidetector computed tomography (MDCT) has emerged as an effective imaging technique to augment forensic autopsy. 
Postmortem change and decomposition are always present at autopsy and on postmortem MDCT because they begin to occur immediately upon death. Consequently, postmortem change and decomposition on postmortem MDCT should be recognized and not mistaken for a pathologic process or injury. Livor mortis increases the attenuation of vasculature and dependent tissues on MDCT. It may also produce a hematocrit effect with fluid levels in the large caliber blood vessels and cardiac chambers from dependent layering erythrocytes. Rigor mortis and algor mortis have no specific MDCT features. In contrast, decomposition through autolysis, putrefaction, and insect and animal predation produce dramatic alterations in the appearance of the body on MDCT. Autolysis alters the attenuation of organs. The most dramatic autolytic changes on MDCT are seen in the brain where cerebral sulci and ventricles are effaced and gray-white matter differentiation is lost almost immediately after death. Putrefaction produces a pattern of gas that begins with intravascular gas and proceeds to gaseous distension of all anatomic spaces, organs, and soft tissues. Knowledge of the spectrum of postmortem change and decomposition is an important component of postmortem MDCT interpretation.", "title": "" }, { "docid": "49c1924821c326f803cefff58ca7ab67", "text": "Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, a semantic gap between the analysis code and the program being analyzed, architecture/OS specificity, being user-mode only, and lacking APIs. We present DECAF, a virtual machine based, multi-target, whole-system dynamic binary analysis framework built on top of QEMU. DECAF provides Just-In-Time Virtual Machine Introspection and a plugin architecture with a simple-to-use event-driven programming interface. DECAF implements a new instruction-level taint tracking engine at bit granularity, which exercises fine control over the QEMU Tiny Code Generator (TCG) intermediate representation to accomplish on-the-fly optimizations while ensuring that the taint propagation is sound and highly precise. We perform a formal analysis of DECAF's taint propagation rules to verify that most instructions introduce neither false positives nor false negatives. We also present three platform-neutral plugins—Instruction Tracer, Keylogger Detector, and API Tracer, to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. Implementation of DECAF consists of 9,550 lines of C++ code and 10,270 lines of C code and we evaluate DECAF using CPU2006 SPEC benchmarks and show average overhead of 605 percent for system wide tainting and 12 percent for VMI.", "title": "" }, { "docid": "699f4b29e480d89b158326ec4c778f7b", "text": "Much attention is currently being paid in both the academic and practitioner literatures to the value that organisations could create through the use of big data and business analytics (Gillon et al, 2012; Mithas et al, 2013). For instance, Chen et al (2012, p. 1166–1168) suggest that business analytics and related technologies can help organisations to ‘better understand its business and markets’ and ‘leverage opportunities presented by abundant data and domain-specific analytics’. Similarly, LaValle et al (2011, p. 
22) report that top-performing organisations ‘make decisions based on rigorous analysis at more than double the rate of lower performing organisations’ and that in such organisations analytic insight is being used to ‘guide both future strategies and day-to-day operations’. We argue here that while there is some evidence that investments in business analytics can create value, the thesis that ‘business analytics leads to value’ needs deeper analysis. In particular, we argue here that the roles of organisational decision-making processes, including resource allocation processes and resource orchestration processes (Helfat et al, 2007; Teece, 2009), need to be better understood in order to understand how organisations can create value from the use of business analytics. Specifically, we propose that the first-order effects of business analytics are likely to be on decision-making processes and that improvements in organisational performance are likely to be an outcome of superior decision-making processes enabled by business analytics. This paper is set out as follows. Below, we identify prior research traditions in the Information Systems (IS) literature that discuss the potential of data and analytics to create value. This is to put into perspective the current excitement around ‘analytics’ and ‘big data’, and to position those topics within prior research traditions. We then draw on a number of existing literatures to develop a research agenda to understand the relationship between business analytics, decision-making processes and organisational performance. Finally, we discuss how the three papers in this Special Issue advance the research agenda.", "title": "" }, { "docid": "e13fc2c9f5aafc6c8eb1909592c07a70", "text": "We introduce DropAll, a generalization of DropOut [1] and DropConnect [2], for regularization of fully-connected layers within convolutional neural networks. Applying these methods amounts to subsampling a neural network by dropping units. Training with DropOut, a randomly selected subset of activations is dropped; when training with DropConnect, we drop randomly selected subsets of weights. With DropAll we can perform both methods. We show the validity of our proposal by improving the classification error of networks trained with DropOut and DropConnect, on a common image classification dataset. To improve the classification, we also used a new method for combining networks, which was proposed in [3].", "title": "" }, { "docid": "a4c8e2938b976a37f38efc1ce5bc6286", "text": "As a classic statistical model of 3D facial shape and texture, 3D Morphable Model (3DMM) is widely used in facial analysis, e.g., model fitting, image synthesis. Conventional 3DMM is learned from a set of well-controlled 2D face images with associated 3D face scans, and represented by two sets of PCA basis functions. Due to the type and amount of training data, as well as the linear bases, the representation power of 3DMM can be limited. 
To address these problems, this paper proposes an innovative framework to learn a nonlinear 3DMM model from a large set of unconstrained face images, without collecting 3D face scans. Specifically, given a face image as input, a network encoder estimates the projection, shape and texture parameters. Two decoders serve as the nonlinear 3DMM to map from the shape and texture parameters to the 3D shape and texture, respectively. With the projection parameter, 3D shape, and texture, a novel analytically-differentiable rendering layer is designed to reconstruct the original input face. The entire network is end-to-end trainable with only weak supervision. We demonstrate the superior representation power of our nonlinear 3DMM over its linear counterpart, and its contribution to face alignment and 3D reconstruction.", "title": "" }, { "docid": "2ba69997f51aa61ffeccce33b2e69054", "text": "We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at https: //sites.google.com/view/simopt.", "title": "" }, { "docid": "0b17e1cbfa3452ba2ff7c00f4e137aef", "text": "Brain-computer interfaces (BCIs) promise to provide a novel access channel for assistive technologies, including augmentative and alternative communication (AAC) systems, to people with severe speech and physical impairments (SSPI). Research on the subject has been accelerating significantly in the last decade and the research community took great strides toward making BCI-AAC a practical reality to individuals with SSPI. Nevertheless, the end goal has still not been reached and there is much work to be done to produce real-world-worthy systems that can be comfortably, conveniently, and reliably used by individuals with SSPI with help from their families and care givers who will need to maintain, setup, and debug the systems at home. This paper reviews reports in the BCI field that aim at AAC as the application domain with a consideration on both technical and clinical aspects.", "title": "" }, { "docid": "ee0c8eafd5804b215b34a443d95259d4", "text": "Fog computing has emerged as a promising technology that can bring the cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, and how data centers can build the cloud infrastructure and how applications can make use of this infrastructure, there is no common picture on what fog computing and a fog node, as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a “mini-cloud,” located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. 
Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, a specific use case or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes as building blocks of fog computing, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts, lessons learned from their implementation, and show how a conceptual framework is emerging towards a unifying fog node definition. We focus on core functionalities of a fog node as well as in the accompanying opportunities and challenges towards their practical realization in the near future.", "title": "" }, { "docid": "5aebbb08b705d98dbde9d3efe4affdf8", "text": "The benefit of localized features within the regular domain has given rise to the use of Convolutional Neural Networks (CNNs) in machine learning, with great proficiency in the image classification. The use of CNNs becomes problematic within the irregular spatial domain due to design and convolution of a kernel filter being non-trivial. One solution to this problem is to utilize graph signal processing techniques and the convolution theorem to perform convolutions on the graph of the irregular domain to obtain feature map responses to learnt filters. We propose graph convolution and pooling operators analogous to those in the regular domain. We also provide gradient calculations on the input data and spectral filters, which allow for the deep learning of an irregular spatial domain problem. Signal filters take the form of spectral multipliers, applying convolution in the graph spectral domain. Applying smooth multipliers results in localized convolutions in the spatial domain, with smoother multipliers providing sharper feature maps. Algebraic Multigrid is presented as a graph pooling method, reducing the resolution of the graph through agglomeration of nodes between layers of the network. Evaluation of performance on the MNIST digit classification problem in both the regular and irregular domain is presented, with comparison drawn to standard CNN. The proposed graph CNN provides a deep learning method for the irregular domains present in the machine learning community, obtaining 94.23% on the regular grid, and 94.96% on a spatially irregular subsampled MNIST.", "title": "" }, { "docid": "c57d9c4f62606e8fccef34ddd22edaec", "text": "Based on research into learning programming and a review of program visualization research, we designed an educational software tool that aims to target students' apparent fragile knowledge of elementary programming which manifests as difficulties in tracing and writing even simple programs. Most existing tools build on a single supporting technology and focus on one aspect of learning. For example, visualization tools support the development of a conceptual-level understanding of how programs work, and automatic assessment tools give feedback on submitted tasks. We implemented a combined tool that closely integrates programming tasks with visualizations of program execution and thus lets students practice writing code and more easily transition to visually tracing it in order to locate programming errors. In this paper we present Jype, a web-based tool that provides an environment for visualizing the line-by-line execution of Python programs and for solving programming exercises with support for immediate automatic feedback and an integrated visual debugger. 
Moreover, the debugger allows stepping back in the visualization of the execution as if executing in reverse. Jype is built for Python, when most research in programming education support tools revolves around Java.", "title": "" }, { "docid": "e514e3fc0359332343e99fc95a0eda6f", "text": "AIM\nTo evaluate the efficacy of the rehabilitation protocol on patients with lumbar degenerative disc disease after posterior transpedicular dynamic stabilization (PTDS) surgery.\n\n\nMATERIAL AND METHODS\nPatients (n=50) with single level lumbar degenerative disc disease were recruited for this study. Patients had PTDS surgery with hinged screws. A rehabilitation program was applied for all patients. Phase 1 was the preoperative evaluation phase. Phase 2 (active rest phase) was the first 6 weeks after surgery. During phase 3 (minimal movement phase, 6-12 weeks) pelvic tilt exercises initiated. In phase 4 (dynamic phase, 3-6 months) dynamic lumbar stabilization exercises were started. Phase 5 (return to sports phase) began after the 6th month. The primary outcome criteria were the Visual Analogue Pain Score (VAS) and the Oswestry Disability Index (ODI). Patients were evaluated preoperatively, postoperative 3rd, 12th and 24th months.\n\n\nRESULTS\nThe mean preoperative VAS and ODI scores were 7.52±0.97 and 60.96±8.74, respectively. During the 3rd month, VAS and ODI scores decreased to 2.62±1.05 and 26.2±7.93, respectively. VAS and ODI scores continued to decrease during the 12th month after surgery to 1.4±0.81 and 13.72±6.68, respectively. At the last follow-up (mean 34.1 months) the VAS and ODI scores were found to be 0.68±0.62 and 7.88±3.32, respectively. (p=0.0001).\n\n\nCONCLUSION\nThe protocol was designed for a postoperative rehabilitation program after PTDS surgery for patients with lumbar degenerative disc disease. The good outcomes are the result of a combination of very careful and restrictive patient selection, surgical technique, and the presented rehabilitation program.", "title": "" }, { "docid": "fce8f5ee730e2bbb63f4d1ef003ce830", "text": "In this paper, we introduce an approach for constructing uncertainty sets for robust optimization using new deviation measures for random variables termed the forward and backward deviations. These deviation measures capture distributional asymmetry and lead to better approximations of chance constraints. Using a linear decision rule, we also propose a tractable approximation approach for solving a class of multistage chance-constrained stochastic linear optimization problems. An attractive feature of the framework is that we convert the original model into a second-order cone program, which is computationally tractable both in theory and in practice. We demonstrate the framework through an application of a project management problem with uncertain activity completion time.", "title": "" }, { "docid": "774df4733d98b781f32222cf843ec381", "text": "This paper deals with the unsupervised domain adaptation problem, where one wants to estimate a prediction function f in a given target domain without any labeled sample by exploiting the knowledge available from a source domain where labels are known. Our work makes the following assumption: there exists a nonlinear transformation between the joint feature/label space distributions of the two domain Ps and Pt that can be estimated with optimal transport. We propose a solution of this problem that allows to recover an estimated target P t = (X, f(X)) by optimizing simultaneously the optimal coupling and f . 
We show that our method corresponds to the minimization of a bound on the target error, and provide an efficient algorithmic solution, for which convergence is proved. The versatility of our approach, both in terms of class of hypothesis or loss functions is demonstrated with real world classification and regression problems, for which we reach or surpass state-of-the-art results.", "title": "" }, { "docid": "6c8c21e7cc5a9cc88fa558d7917a81b2", "text": "Recent engineering experiences with the Missile Defense Agency (MDA) Ballistic Missile Defense System (BMDS) highlight the need to analyze the BMDS System of Systems (SoS) including the numerous potential interactions between independently developed elements of the system. The term “interstitials” is used to define the domain of interfaces, interoperability, and integration between constituent systems in an SoS. The authors feel that this domain, at an SoS level, has received insufficient attention within systems engineering literature. The BMDS represents a challenging SoS case study as many of its initial elements were assembled from existing programs of record. The elements tend to perform as designed but their performance measures may not be consistent with the higher level SoS requirements. One of the BMDS challenges is interoperability, to focus the independent elements to interact in a number of ways, either subtle or overt, for a predictable and sustainable national capability. New capabilities desired by national leadership may involve modifications to kill chains, Command and Control (C2) constructs, improved coordination, and performance. These capabilities must be realized through modifications to programs of record and integration across elements of the system that have their own independent programmatic momentum. A challenge of SoS Engineering is to objectively evaluate competing solutions and assess the technical viability of tradeoff options. This paper will present a multifaceted technical approach for integrating a complex, adaptive SoS to achieve a functional capability. Architectural frameworks will be explored, a mathematical technique utilizing graph theory will be introduced, adjuncts to more traditional modeling and simulation techniques such as agent based modeling will be explored, and, finally, newly developed technical and managerial metrics to describe design maturity will be introduced. A theater BMDS construct will be used as a representative set of elements.", "title": "" }, { "docid": "9a27c676b5d356d5feb91850e975a336", "text": "Joseph Goldstein has written in this journal that creation (through invention) and revelation (through discovery) are two different routes to advancement in the biomedical sciences1. In my work as a phytochemist, particularly during the period from the late 1960s to the 1980s, I have been fortunate enough to travel both routes. I graduated from the Beijing Medical University School of Pharmacy in 1955. Since then, I have been involved in research on Chinese herbal medicine in the China Academy of Chinese Medical Sciences (previously known as the Academy of Traditional Chinese Medicine). 
From 1959 to 1962, I was released from work to participate in a training course in Chinese medicine that was especially designed for professionals with backgrounds in Western medicine. The 2.5-year training guided me to the wonderful treasure to be found in Chinese medicine and toward understanding the beauty in the philosophical thinking that underlies a holistic view of human beings and the universe.", "title": "" }, { "docid": "7564ec31bb4e81cc6f8bd9b2b262f5ca", "text": "Traditional methods to calculate CRC suffer from diminishing returns. Doubling the data width doesn't double the maximum data throughput, the worst case timing path becomes slower. Feedback in the traditional implementation makes pipelining problematic. However, the on chip data width used for high throughput protocols is constantly increasing. The battle of reducing static power consumption is one factor driving this trend towards wider data paths. This paper discusses a method for pipelining the calculation of CRC's, such as ISO-3309 CRC32. This method allows independent scaling of circuit frequency and data throughput by varying the data width and the number of pipeline stages. Pipeline latency can be traded for area while slightly affecting timing. Additionally it allows calculation over data that isn't the full width of the input. This often happens at the end of the packet, although it could happen in the middle of the packet if data arrival is bursty. Finally, a fortunate side effect is that it offers the ability to efficiently update a known good CRC value where a small subset of data in the packet has changed. This is a function often desired in routers, for example updating the TTL field in IPv4 packets.", "title": "" } ]
scidocsrr
9949b673c84b955c4039d71dfc4ad3ac
Streaming trend detection in Twitter
[ { "docid": "9fc2d92c42400a45cb7bf6c998dc9236", "text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.", "title": "" }, { "docid": "8732cabe1c2dc0e8587b1a7e03039ef0", "text": "With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. \n In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies <i>event threading</i>. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories.\n We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on a manually labeled data sets show that our models effectively identify the events and capture dependencies among them.", "title": "" } ]
[ { "docid": "da1cecae4f925f331fda67c784e6635d", "text": "This paper surveys recent literature on vehicular social networks that are a particular class of vehicular ad hoc networks, characterized by social aspects and features. Starting from this pillar, we investigate perspectives on next-generation vehicles under the assumption of social networking for vehicular applications (i.e., safety and entertainment applications). This paper plays a role as a starting point about socially inspired vehicles and mainly related applications, as well as communication techniques. Vehicular communications can be considered the “first social network for automobiles” since each driver can share data with other neighbors. For instance, heavy traffic is a common occurrence in some areas on the roads (e.g., at intersections, taxi loading/unloading areas, and so on); as a consequence, roads become a popular social place for vehicles to connect to each other. Human factors are then involved in vehicular ad hoc networks, not only due to the safety-related applications but also for entertainment purposes. Social characteristics and human behavior largely impact on vehicular ad hoc networks, and this arises to the vehicular social networks, which are formed when vehicles (individuals) “socialize” and share common interests. In this paper, we provide a survey on main features of vehicular social networks, from novel emerging technologies to social aspects used for mobile applications, as well as main issues and challenges. Vehicular social networks are described as decentralized opportunistic communication networks formed among vehicles. They exploit mobility aspects, and basics of traditional social networks, in order to create novel approaches of message exchange through the detection of dynamic social structures. An overview of the main state-of-the-art on safety and entertainment applications relying on social networking solutions is also provided.", "title": "" }, { "docid": "a15275cc08ad7140e6dd0039e301dfce", "text": "Cardiovascular disease is more prevalent in type 1 and type 2 diabetes, and continues to be the leading cause of death among adults with diabetes. Although atherosclerotic vascular disease has a multi-factorial etiology, disorders of lipid metabolism play a central role. The coexistence of diabetes with other risk factors, in particular with dyslipidemia, further increases cardiovascular disease risk. A characteristic pattern, termed diabetic dyslipidemia, consists of increased levels of triglycerides, low levels of high density lipoprotein cholesterol, and postprandial lipemia, and is mostly seen in patients with type 2 diabetes or metabolic syndrome. This review summarizes the trends in the prevalence of lipid disorders in diabetes, advances in the mechanisms contributing to diabetic dyslipidemia, and current evidence regarding appropriate therapeutic recommendations.", "title": "" }, { "docid": "006ea5f44521c42ec513edc1cbff1c43", "text": "In 2004 we published in this journal an article describing OntoLearn, one of the first systems to automatically induce a taxonomy from documents and Web sites. Since then, OntoLearn has continued to be an active area of research in our group and has become a reference work within the community. In this paper we describe our next-generation taxonomy learning methodology, which we name OntoLearn Reloaded. 
Unlike many taxonomy learning approaches in the literature, our novel algorithm learns both concepts and relations entirely from scratch via the automated extraction of terms, definitions, and hypernyms. This results in a very dense, cyclic and potentially disconnected hypernym graph. The algorithm then induces a taxonomy from this graph via optimal branching and a novel weighting policy. Our experiments show that we obtain high-quality results, both when building brand-new taxonomies and when reconstructing sub-hierarchies of existing taxonomies.", "title": "" }, { "docid": "a5911891697a1b2a407f231cf0ad6c28", "text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.", "title": "" }, { "docid": "0e30a01870bbbf32482b5ac346607afc", "text": "Hypothyroidism is the pathological condition in which the level of thyroid hormones declines to the deficiency state. This communication address the therapies employed for the management of hypothyroidism as per the Ayurvedic and modern therapeutic perspectives on the basis scientific papers collected from accepted scientific basis like Google, Google Scholar, PubMed, Science Direct, using various keywords. The Ayurveda describe hypothyroidism as the state of imbalance of Tridoshas and suggest the treatment via use of herbal plant extracts, life style modifications like practicing yoga and various dietary supplements. The modern medicine practice define hypothyroidism as the disease state originated due to formation of antibodies against thyroid gland and hormonal imbalance and incorporate the use of hormone replacement i.e. Levothyroxine, antioxidants. Various plants like Crataeva nurvula and dietary supplements like Capsaicin, Forskolin, Echinacea, Ginseng and Bladderwrack can serve as a potential area of research as thyrotropic agents.", "title": "" }, { "docid": "545064c02ed0ca14c53b3d083ff84eac", "text": "We present a novel polarization imaging sensor by monolithically integrating aluminum nanowire optical filters with an array of CCD imaging elements. The CCD polarization image sensor is composed of 1000 by 1000 imaging elements with 7.4 μm pixel pitch. The image sensor has a dynamic range of 65dB and signal-to-noise ratio of 45dB. The CCD array is covered with an array of pixel-pitch matched nanowire polarization filters with four different orientations offset by 45°. 
The complete imaging sensor is used for real-time reconstruction of the shape of various objects.", "title": "" }, { "docid": "07905317dcdbcf1332fd57ffaa02f8d3", "text": "Motivation\nIdentifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters.\n\n\nResults\nHere, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding validates further our approach and enables chemical treatment potency estimation via CNNs.\n\n\nAvailability and Implementation\nThe network specifications and solver definitions are provided in Supplementary Software 1.\n\n\nContact\nwilliam_jose.godinez_navarro@novartis.com or xian-1.zhang@novartis.com.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "f4535d47191caaa1e830e5d8fae6e1ba", "text": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards ~100% sensitivity at the cost of high FP levels (~40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. 
in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.", "title": "" }, { "docid": "216698730aa68b3044f03c64b77e0e62", "text": "Portable biomedical instrumentation has become an important part of diagnostic and treatment instrumentation. Low-voltage and low-power tendencies prevail. A two-electrode biopotential amplifier, designed for low-supply voltage (2.7–5.5 V), is presented. This biomedical amplifier design has high differential and sufficiently low common mode input impedances achieved by means of positive feedback, implemented with an original interface stage. The presented circuit makes use of passive components of popular values and tolerances. The amplifier is intended for use in various two-electrode applications, such as Holter monitors, external defibrillators, ECG monitors and other heart beat sensing biomedical devices.", "title": "" }, { "docid": "ca9a7a1f7be7d494f6c0e3e4bb408a95", "text": "An enduring and richly elaborated dichotomy in cognitive neuroscience is that of reflective versus reflexive decision making and choice. Other literatures refer to the two ends of what is likely to be a spectrum with terms such as goal-directed versus habitual, model-based versus model-free or prospective versus retrospective. One of the most rigorous traditions of experimental work in the field started with studies in rodents and graduated via human versions and enrichments of those experiments to a current state in which new paradigms are probing and challenging the very heart of the distinction. We review four generations of work in this tradition and provide pointers to the forefront of the field's fifth generation.", "title": "" }, { "docid": "ea0b23e9c37fa35da9ff6d9091bbee5e", "text": "Since the invention of the wheel, Man has sought to reduce effort to get things done easily. Ultimately, it has resulted in the invention of the Robot, an Engineering Marvel. Up until now, the biggest factor that hampers wide proliferation of robots is locomotion and maneuverability. They are not dynamic enough to conform even to the most commonplace terrain such as stairs. To overcome this, we are proposing a stair climbing robot that looks a lot like the human leg and can adjust itself according to the height of the step. But, we are currently developing a unit to carry payload of about 4 Kg. The automatic adjustment in the robot according to the height of the stair is done by connecting an Android device that has an application programmed in OpenCV with an Arduino in Host mode. The Android Device uses it camera to calculate the height of the stair and sends it to the Arduino for further calculation. This design employs an Arduino Mega ADK 2560 board to control the robot and other home fabricated custom PCB to interface it with the Arduino Board. The bot is powered by Li-Ion batteries and Servo motors.", "title": "" }, { "docid": "9a3a73f35b27d751f237365cc34c8b28", "text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. 
While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.", "title": "" }, { "docid": "4721173eea1997316b8c9eca8b4a8d05", "text": "Conventional centralized cloud computing is a success for benefits such as on-demand, elasticity, and high colocation of data and computation. However, the paradigm shift towards “Internet of things” (IoT) will pose some unavoidable challenges: (1) massive data volume impossible for centralized datacenters to handle; (2) high latency between edge “things” and centralized datacenters; (3) monopoly, inhibition of innovations, and non-portable applications due to the proprietary application delivery in centralized cloud. The emergence of edge cloud gives hope to address these challenges. In this paper, we propose a new framework called “HomeCloud” focusing on an open and efficient new application delivery in edge cloud integrating two complementary technologies: Network Function Virtualization (NFV) and Software-Defined Networking (SDN). We also present a preliminary proof-of-concept testbed demonstrating the whole process of delivering a simple multi-party chatting application in the edge cloud. In the future, the HomeCloud framework can be further extended to support other use cases that demand portability, cost-efficiency, scalability, flexibility, and manageability. To the best of our knowledge, this framework is the first effort aiming at facilitating new application delivery in such a new edge cloud context.", "title": "" }, { "docid": "e630891703d4a4e6e65fea11698f24c7", "text": "In spite of meticulous planning, well documentation and proper process control during software development, occurrences of certain defects are inevitable. These software defects may lead to degradation of the quality which might be the underlying cause of failure. In today‟s cutting edge competition it‟s necessary to make conscious efforts to control and minimize defects in software engineering. However, these efforts cost money, time and resources. This paper identifies causative factors which in turn suggest the remedies to improve software quality and productivity. The paper also showcases on how the various defect prediction models are implemented resulting in reduced magnitude of defects.", "title": "" }, { "docid": "c5ecfcebbbd577a0bc14ccb4613a98ac", "text": "When Jean-Dominique Bauby suffered from a cortico-subcortical stroke that led to complete paralysis with totally intact sensory and cognitive functions, he described his experience in The Diving-Bell and the Butterfly as “something like a giant invisible diving-bell holds my whole body prisoner”. 
This horrifying condition also occurs as a consequence of a progressive neurological disease, amyotrophic lateral sclerosis, which involves progressive degeneration of all the motor neurons of the somatic motor system. These ‘locked-in’ patients ultimately become unable to express themselves and to communicate even their most basic wishes or desires, as they can no longer control their muscles to activate communication devices. We have developed a new means of communication for the completely paralysed that uses slow cortical potentials (SCPs) of the electro-encephalogram to drive an electronic spelling device.", "title": "" }, { "docid": "9fdecc8854f539ddf7061c304616130b", "text": "This paper describes the pricing strategy model deployed at Airbnb, an online marketplace for sharing home and experience. The goal of price optimization is to help hosts who share their homes on Airbnb set the optimal price for their listings. In contrast to conventional pricing problems, where pricing strategies are applied to a large quantity of identical products, there are no \"identical\" products on Airbnb, because each listing on our platform offers unique values and experiences to our guests. The unique nature of Airbnb listings makes it very difficult to estimate an accurate demand curve that's required to apply conventional revenue maximization pricing strategies.\n Our pricing system consists of three components. First, a binary classification model predicts the booking probability of each listing-night. Second, a regression model predicts the optimal price for each listing-night, in which a customized loss function is used to guide the learning. Finally, we apply additional personalization logic on top of the output from the second model to generate the final price suggestions. In this paper, we focus on describing the regression model in the second stage of our pricing system. We also describe a novel set of metrics for offline evaluation. The proposed pricing strategy has been deployed in production to power the Price Tips and Smart Pricing tool on Airbnb. Online A/B testing results demonstrate the effectiveness of the proposed strategy model.", "title": "" }, { "docid": "5b507508fd3b3808d61e822d2a91eab9", "text": "In this brief, we propose a stand-alone system-on-a-programmable-chip (SOPC)-based cloud system to accelerate massive electrocardiogram (ECG) data analysis. The proposed system tightly couples network I/O handling hardware to data processing pipelines in a single field-programmable gate array (FPGA), offloading both networking operations and ECG data analysis. In this system, we first propose a massive-sessions optimized TCP/IP hardware stack using a macropipeline architecture to accelerate network packet processing. Second, we propose a streaming architecture to accelerate ECG signal processing, including QRS detection, feature extraction, and classification. We verify our design on XC6VLX550T FPGA using real ECG data. Compared to commercial servers, our system shows up to 38× improvement in performance and 142× improvement in energy efficiency.", "title": "" }, { "docid": "cf94d312bb426e64e364dfa33b09efeb", "text": "The attractiveness of a face is a highly salient social signal, influencing mate choice and other social judgements. In this study, we used event-related functional magnetic resonance imaging (fMRI) to investigate brain regions that respond to attractive faces which manifested either a neutral or mildly happy face expression. 
Attractive faces produced activation of medial orbitofrontal cortex (OFC), a region involved in representing stimulus-reward value. Responses in this region were further enhanced by a smiling facial expression, suggesting that the reward value of an attractive face as indexed by medial OFC activity is modulated by a perceiver directed smile.", "title": "" }, { "docid": "986bd4907d512402a188759b5bdef513", "text": "► We consider a case of laparoscopic aortic lymphadenectomy for an early ovarian cancer including a comprehensive surgical staging. ► The patient was found to have a congenital anatomic abnormality: a right renal malrotation with an accessory renal artery. ► We used a preoperative CT angiography study to diagnose such anatomical variations and to adequate the proper surgical technique.", "title": "" } ]
scidocsrr
32b63f6811f973662d2f6e568c5781dd
A Multi-dimensional Comparison of Toolkits for Machine Learning with Big Data
[ { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" } ]
[ { "docid": "af47d1cc068467eaee7b6129682c9ee3", "text": "Diffusion kurtosis imaging (DKI) is gaining rapid adoption in the medical imaging community due to its ability to measure the non-Gaussian property of water diffusion in biological tissues. Compared to traditional diffusion tensor imaging (DTI), DKI can provide additional details about the underlying microstructural characteristics of the neural tissues. It has shown promising results in studies on changes in gray matter and mild traumatic brain injury where DTI is often found to be inadequate. The DKI dataset, which has high-fidelity spatio-angular fields, is difficult to visualize. Glyph-based visualization techniques are commonly used for visualization of DTI datasets; however, due to the rapid changes in orientation, lighting, and occlusion, visually analyzing the much more higher fidelity DKI data is a challenge. In this paper, we provide a systematic way to manage, analyze, and visualize high-fidelity spatio-angular fields from DKI datasets, by using spherical harmonics lighting functions to facilitate insights into the brain microstructure.", "title": "" }, { "docid": "d0e2f8c9c7243f5a67e73faeb78038d1", "text": "This paper presents a novel learning framework for training boosting cascade based object detector from large scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds. Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. First, the boosting cascade can be trained very efficiently. Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. Second, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed. Third, the built detector is small in model-size due to short cascade stages.", "title": "" }, { "docid": "07cb8967d6d347cbc8dd0645e5c1f4b0", "text": "Obtaining reliable data describing local poverty metrics at a granularity that is informative to policy-makers requires expensive and logistically difficult surveys, particularly in the developing world. Not surprisingly, the poverty stricken regions are also the ones which have a high probability of being a war zone, have poor infrastructure and sometimes have governments that do not cooperate with internationally funded development efforts. We train a CNN on free and publicly available daytime satellite images of the African continent from Landsat 7 to build a model for predicting local economic livelihoods. 
Only 5% of the satellite images can be associated with labels (which are obtained from DHS Surveys) and thus a semi-supervised approach using a GAN [33], albeit with a more stable-totrain flavor of GANs called the Wasserstein GAN regularized with gradient penalty [15] is used. The method of multitask learning is employed to regularize the network and also create an end-to-end model for the prediction of multiple poverty metrics.", "title": "" }, { "docid": "b7390d19beb199e21dac200f2f7021f3", "text": "In this paper, we propose a workflow and a machine learning model for recognizing handwritten characters on form document. The learning model is based on Convolutional Neural Network (CNN) as a powerful feature extraction and Support Vector Machines (SVM) as a high-end classifier. The proposed method is more efficient than modifying the CNN with complex architecture. We evaluated some SVM and found that the linear SVM using L1 loss function and L2 regularization giving the best performance both of the accuracy rate and the computation time. Based on the experiment results using data from NIST SD 192nd edition both for training and testing, the proposed method which combines CNN and linear SVM using L1 loss function and L2 regularization achieved a recognition rate better than only CNN. The recognition rate achieved by the proposed method are 98.85% on numeral characters, 93.05% on uppercase characters, 86.21% on lowercase characters, and 91.37% on the merger of numeral and uppercase characters. While the original CNN achieves an accuracy rate of 98.30% on numeral characters, 92.33% on uppercase characters, 83.54% on lowercase characters, and 88.32% on the merger of numeral and uppercase characters. The proposed method was also validated by using ten folds cross-validation, and it shows that the proposed method still can improve the accuracy rate. The learning model was used to construct a handwriting recognition system to recognize a more challenging data on form document automatically. The pre-processing, segmentation and character recognition are integrated into one system. The output of the system is converted into an editable text. The system gives an accuracy rate of 83.37% on ten different test form document.", "title": "" }, { "docid": "425270bbfd1290a0692afeea95fa090f", "text": "This paper introduces a bounding gait control algorithm that allows a successful implementation of duty cycle modulation in the MIT Cheetah 2. Instead of controlling leg stiffness to emulate a `springy leg' inspired from the Spring-Loaded-Inverted-Pendulum (SLIP) model, the algorithm prescribes vertical impulse by generating scaled ground reaction forces at each step to achieve the desired stance and total stride duration. Therefore, we can control the duty cycle: the percentage of the stance phase over the entire cycle. By prescribing the required vertical impulse of the ground reaction force at each step, the algorithm can adapt to variable duty cycles attributed to variations in running speed. Following linear momentum conservation law, in order to achieve a limit-cycle gait, the sum of all vertical ground reaction forces must match vertical momentum created by gravity during a cycle. In addition, we added a virtual compliance control in the vertical direction to enhance stability. 
The stiffness of the virtual compliance is selected based on the eigenvalue analysis of the linearized Poincaré map and the chosen stiffness is 700 N/m, which corresponds to around 12% of the stiffness used in the previous trotting experiments of the MIT Cheetah, where the ground reaction forces are purely caused by the impedance controller with equilibrium point trajectories. This indicates that the virtual compliance control does not significantly contributes to generating ground reaction forces, but to stability. The experimental results show that the algorithm successfully prescribes the duty cycle for stable bounding gaits. This new approach can shed a light on variable speed running control algorithm.", "title": "" }, { "docid": "bc4d41ba58f703da48ff202a9006f4bd", "text": "Today, Smart Home monitoring services have attracted much attention from both academia and industry. However, in the conventional monitoring mechanism the remote camera can not be accessed for remote monitoring anywhere and anytime. Besides, traditional approaches might have the limitation in local storage due to lack of device elasticity. In this paper, we proposed a Cloud-based monitoring framework to implement the remote monitoring services of Smart Home. The main technical issues considered include Data-Cloud storage, Local-Cache mechanism, Media device control, NAT traversal, etc. The implementation shows three use scenarios: (a) operating and controlling video cameras for remote monitoring through mobile devices or sound sensors; (b) streaming live video from cameras and sending captured image to mobile devices; (c) recording videos and images on a cloud computing platform for future playback. This system framework could be extended to other applications of Smart Home.", "title": "" }, { "docid": "e74573560a8da7be758c619ba85202df", "text": "This paper proposes two hybrid connectionist structural acoustical models for robust context independent phone like and word like units for speaker-independent recognition system. Such structure combines strength of Hidden Markov Models (HMM) in modeling stochastic sequences and the non-linear classification capability of Artificial Neural Networks (ANN). Two kinds of Neural Networks (NN) are investigated: Multilayer Perceptron (MLP) and Elman Recurrent Neural Networks (RNN). The hybrid connectionist-HMM systems use discriminatively trained NN to estimate the a posteriori probability distribution among subword units given the acoustic observations. We efficiently tested the performance of the conceived systems using the TIMIT database in clean and noisy environments with two perceptually motivated features: MFCC and PLP. Finally, the robustness of the systems is evaluated by using a new preprocessing stage for denoising based on wavelet transform. A significant improvement in performance is obtained with the proposed method.", "title": "" }, { "docid": "46200c35a82b11d989c111e8398bd554", "text": "A physics-based compact gallium nitride power semiconductor device model is presented in this work, which is the first of its kind. The model derivation is based on the classical drift-diffusion model of carrier transport, which expresses the channel current as a function of device threshold voltage and externally applied electric fields. The model is implemented in the Saber® circuit simulator using the MAST hardware description language. The model allows the user to extract the parameters from the dc I-V and C-V characteristics that are also available in the device datasheets. 
A commercial 80 V EPC GaN HEMT is used to demonstrate the dynamic validation of the model against the transient device characteristics in a double-pulse test and a boost converter circuit configuration. The simulated versus measured device characteristics show good agreement and validate the model for power electronics design and applications using the next generation of GaN HEMT devices.", "title": "" }, { "docid": "3bb4666a27f6bc961aa820d3f9301560", "text": "The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss the multi-agent models and the verification results of the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. The conjecture is that intention aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to be the basis to formally prove this conjecture.", "title": "" }, { "docid": "00b98536f0ecd554442a67fb31f77f4c", "text": "We use a large, nationally-representative sample of working-age adults to demonstrate that personality (as measured by the Big Five) is stable over a four-year period. Average personality changes are small and do not vary substantially across age groups. Intra-individual personality change is generally unrelated to experiencing adverse life events and is unlikely to be economically meaningful. Like other non-cognitive traits, personality can be modeled as a stable input into many economic decisions. JEL classi cation: J3, C18.", "title": "" }, { "docid": "e0a8035f9e61c78a482f2e237f7422c6", "text": "Aims: This paper introduces how substantial decision-making and leadership styles relates with each other. Decision-making styles are connected with leadership practices and institutional arrangements. Study Design: Qualitative research approach was adopted in this study. A semi structure interview was use to elicit data from the participants on both leadership styles and decision-making. Place and Duration of Study: Institute of Education international Islamic University", "title": "" }, { "docid": "d5cc92aad3e7f1024a514ff4e6379c86", "text": "This chapter describes the convergence of two of the most influential technologies in the last decade, namely business intelligence (BI) and the Semantic Web (SW). Business intelligence is used by almost any enterprise to derive important business-critical knowledge from both internal and (increasingly) external data. When using external data, most often found on the Web, the most important issue is knowing the precise semantics of the data. Without this, the results cannot be trusted. Here, Semantic Web technologies come to the rescue, as they allow semantics ranging from very simple to very complex to be specified for any web-available resource. SW technologies do not only support capturing the “passive” semantics, but also support active inference and reasoning on the data. The chapter first presents a motivating running example, followed by an introduction to the relevant SW foundation concepts. The chapter then goes on to survey the use of SW technologies for data integration, including semantic DOI: 10.4018/978-1-61350-038-5.ch014", "title": "" }, { "docid": "b82b5ebf186220f8bdb41b7631fd475d", "text": "Fraudulent activity on the Internet, in particular the practice known as ‘Phishing’, is on the increase. 
Although a number of technology focussed counter measures have been explored user behaviour remains fundamental to increased online security. Encouraging users to engage in secure online behaviour is difficult with a number of different barriers to change. Guided by a model adapted from health psychology this paper reports on a study designed to encourage secure behaviour online. The study aimed to investigate the effects of education via a training program and the effects of risk level manipulation on subsequent self-reported behaviour online. The training program ‘Anti-Phishing Phil’ informed users of the common types of phishing threats and how to identify them whilst the risk level manipulation randomly allocated participants to either high risk or low risk of becoming a victim of online fraud. Sixty-four participants took part in the study, which comprised of 9 males and 55 females with an age range of 18– 43 years. Participants were randomly allocated to one of four experimental groups. High threat information and/or the provision of phishing education were expected to increase self-reports of secure behaviour. Secure behaviour was measured at three stages, a baseline measure stage, an intention measure stage, and a 7-day follow-up measure stage. The results showed that offering a seemingly tailored risk message increased users’ intentions to act in a secure manner online regardless of whether the risk message indicated they were at high or low risk of fraud. There was no effect of the training programme on secure behaviour in general. The findings are discussed in relation to the model of behaviour change, information provision and the transferability of training. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0f3ce3e7467f9c61e40fca28ccd7f86b", "text": "This paper provides insight into a failure mechanism that impacts a broad range of industrial equipment. Voltage surges have often been blamed for unexplained equipment failure in the field. Extensive voltage monitoring data suggests that voltage sags occur much more frequently than voltage surges, and that current surges that accompany voltage sag recovery may be the actual culprit causing equipment damage. A serious limitation in equipment specification is highlighted, pointing to what is possibly the root-cause for a large percentage of unexplained equipment field failures. This paper also outlines the need for a standard governing the behavior of equipment under voltage sags, and suggests solutions to protect existing equipment in the field.", "title": "" }, { "docid": "ca2258408035374cd4e7d1519e24e187", "text": "In this paper we propose a novel application of Hidden Markov Models to automatic generation of informative headlines for English texts. We propose four decoding parameters to make the headlines appear more like Headlinese, the language of informative newspaper headlines. We also allow for morphological variation in words between headline and story English. Informal and formal evaluations indicate that our approach produces informative headlines, mimicking a Headlinese style generated by humans.", "title": "" }, { "docid": "c7e3fc9562a02818bba80d250241511d", "text": "Convolutional networks trained on large supervised dataset produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. 
In this paper, we explore the potential of leveraging massive, weaklylabeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and captions, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity, and learn correspondences between different languages.", "title": "" }, { "docid": "dbf419dabb53f9739a35db14877d2d90", "text": "Investigations in the development of lead-free piezoelectric ceramics have recently claimed properties comparable to that of PZT-based materials. In this work, the dielectric and piezoelectric properties of the various systems were contrasted in relation to their respective Curie temperatures. Though comparable with respect to TC, enhanced properties reported in the K,NaNbO3 family are the result of increased polarizability associated with the Torthor-tetragonal polymorphic phase transition being compositionally shifted downward and not from a morphotropic phase boundary (MPB) as widely reported. As expected, the properties are strongly temperature dependent unlike that observed for MPB systems. Analogous to PZT, enhanced properties are noted for MPB compositions in the Na,BiTiO3-BaTiO3 and the ternary system with K,BiTiO3, but offer properties significantly lower than that of PZTs. The consequence of a ferroelectric to antiferroelectric transition well below TC further limits their usefulness.", "title": "" }, { "docid": "c3af6eae1bd5f2901914d830280eca48", "text": "This paper proposes a novel approach for the classification of 3D shapes exploiting surface and volumetric clues inside a deep learning framework. The proposed algorithm uses three different data representations. The first is a set of depth maps obtained by rendering the 3D object. The second is a novel volumetric representation obtained by counting the number of filled voxels along each direction. Finally NURBS surfaces are fitted over the 3D object and surface curvature parameters are selected as the third representation. All the three data representations are fed to a multi-branch Convolutional Neural Network. Each branch processes a different data source and produces a feature vector by using convolutional layers of progressively reduced resolution. The extracted feature vectors are fed to a linear classifier that combines the outputs in order to get the final predictions. Experimental results on the ModelNet dataset show that the proposed approach is able to obtain a state-of-the-art performance.", "title": "" }, { "docid": "5abc2b1536d989ff77e23ee9db22f625", "text": "Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies the Monte Carlo simulations to expected short-term operating condition changes, feature selection, and a linear least squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. 
The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.", "title": "" }, { "docid": "e6298cd08f89d3cb8a6f8a78c2f4ae49", "text": "We present a fast pattern matching algorithm with a large set of templates. The algorithm is based on typical template matching speeded up by a dual decomposition: the Fourier transform and the Karhunen-Loeve transform. The proposed algorithm is appropriate for the search of an object with unknown distortion within a short period. Patterns with different distortion differ slightly from each other and are highly correlated. The image vector subspace required for effective representation can be defined by a small number of eigenvectors derived by the Karhunen-Loeve transform. A vector subspace spanned by the eigenvectors is generated, and any image vector in the subspace is considered as a pattern to be recognized. The pattern matching of objects with unknown distortion is formulated as the process to extract the portion of the input image, find the pattern most similar to the extracted portion in the subspace, compute normalized correlation between them at each location in the input image, and find the location with the best score. Searching for objects with unknown distortion requires vast computation. The formulation above makes it possible to decompose highly correlated reference images into eigenvectors, as well as to decompose images in the frequency domain, and to speed up the process significantly. Index Terms—Template matching, pattern matching, Karhunen-Loeve transform, Fourier transform, eigenvector.", "title": "" } ]
scidocsrr
6587bb0346c0a5cf7e802580b6671f89
Robust and Discriminative Self-Taught Learning
[ { "docid": "2c30b761ec425c6bd8fff97a9ce4868c", "text": "We propose a joint representation and classification framework that achieves the dual goal of finding the most discriminative sparse overcomplete encoding and optimal classifier parameters. Formulating an optimization problem that combines the objective function of the classification with the representation error of both labeled and unlabeled data, constrained by sparsity, we propose an algorithm that alternates between solving for subsets of parameters, whilst preserving the sparsity. The method is then evaluated over two important classification problems in computer vision: object categorization of natural images using the Caltech 101 database and face recognition using the Extended Yale B face database. The results show that the proposed method is competitive against other recently proposed sparse overcomplete counterparts and considerably outperforms many recently proposed face recognition techniques when the number of training samples is small.", "title": "" } ]
[ { "docid": "80759a5c2e60b444ed96c9efd515cbdf", "text": "The Web of Things is an active research field which aims at promoting the easy access and handling of smart things' digital representations through the adoption of Web standards and technologies. While huge research and development efforts have been spent on lower level networks and software technologies, it has been recognized that little experience exists instead in modeling and building applications for the Web of Things. Although several works have proposed Representational State Transfer (REST) inspired approaches for the Web of Things, a main limitation is that poor support is provided to web developers for speeding up the development of Web of Things applications while taking full advantage of REST benefits. In this paper, we propose a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them. The framework consists of a Web Resource information model, a middleware, and tools for developing and publishing smart things' digital representations on the Web. We discuss the framework compliance with REST guidelines and its major implementation choices. Finally, we report on our test activities carried out within the SmartSantander European Project to evaluate the use and proficiency of our framework in a smart city scenario.", "title": "" }, { "docid": "58f6247a0958bf0087620921c99103b1", "text": "This paper addresses an information-theoretic aspect of k-means and spectral clustering. First, we revisit the k-means clustering and show that its objective function is approximately derived from the minimum entropy principle when the Renyi's quadratic entropy is used. Then we present a maximum within-clustering association that is derived using a quadratic distance measure in the framework of minimum entropy principle, which is very similar to a class of spectral clustering algorithms that is based on the eigen-decomposition method.", "title": "" }, { "docid": "19a73e2e729fa115a89c64058eafc9ca", "text": "This paper aims to present a framework for describing Customer Knowledge Management in online purchase process using two models from literature including consumer online purchase process and ECKM. Since CKM is a recent concept and little empirical research is available, we will first present the theories from which CKM derives. In the first stage we discuss about e-commerce trend and increasing importance of customer loyalty in today’s business environment. Then some related concepts about Knowledge Management, Customer Relationship Management and CKM are presented, in order to provide the reader with a better understanding and clear picture regarding CKM. Finally, providing models representing e-CKM and online purchasing process, we propose a comprehensive procedure to manage customer data and knowledge in e-commerce.", "title": "" }, { "docid": "c8f3b235811dd64b9b1d35d596ff22f5", "text": "Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm, prototypethen-edit for response generation, that first retrieves a prototype response from a pre-defined index and then edits the prototype response according to the differences between the prototype context and current context. 
Our motivation is that the retrieved prototype provides a good start-point for generation because it is grammatical and informative, and the post-editing process further improves the relevance and coherence of the prototype. In practice, we design a contextaware editing model that is built upon an encoder-decoder framework augmented with an editing vector. We first generate an edit vector by considering lexical differences between a prototype context and current context. After that, the edit vector and the prototype response representation are fed to a decoder to generate a new response. Experiment results on a large scale dataset demonstrate that our new paradigm significantly increases the relevance, diversity and originality of generation results, compared to traditional generative models. Furthermore, our model outperforms retrieval-based methods in terms of relevance and originality.", "title": "" }, { "docid": "9d2859ee4e5968237078933e117475f8", "text": "This paper reports on an interview-based study of 18 authors of different chapters of the two-volume book \"Architecture of Open-Source Applications\". The main contributions are a synthesis of the process of authoring essay-style documents (ESDs) on software architecture, a series of observations on important factors that influence the content and presentation of architectural knowledge in this documentation form, and a set of recommendations for readers and writers of ESDs on software architecture. We analyzed the influence of three factors in particular: the evolution of a system, the community involvement in the project, and the personal characteristics of the author. This study provides the first systematic investigation of the creation of ESDs on software architecture. The observations we collected have implications for both readers and writers of ESDs, and for architecture documentation in general.", "title": "" }, { "docid": "b93446bab637abd4394338615a5ef6e9", "text": "Genetic programming is a methodology inspired by biological evolution. By using computational analogs to biological crossover and mutation new versions of a program are generated automatically. This population of new programs is then evaluated by an user defined fittness function to only select the programs that show an improved behavior as compared to the original program. In this case the desired behavior is to retain all original functionality and additionally fixing bugs found in the program code.", "title": "" }, { "docid": "e42a1faf3d983bac59c0bfdd79212093", "text": "L eadership matters, according to prominent leadership scholars (see also Bennis, 2007). But what is leadership? That turns out to be a challenging question to answer. Leadership is a complex and diverse topic, and trying to make sense of leadership research can be an intimidating endeavor. One comprehensive handbook of leadership (Bass, 2008), covering more than a century of scientific study, comprises more than 1,200 pages of text and more than 200 additional pages of references! There is clearly a substantial scholarly body of leadership theory and research that continues to grow each year. Given the sheer volume of leadership scholarship that is available, our purpose is not to try to review it all. That is why our focus is on the nature or essence of leadership as we and our chapter authors see it. 
But to fully understand and appreciate the nature of leadership, it is essential that readers have some background knowledge of the history of leadership research, the various theoretical streams that have evolved over the years, and emerging issues that are pushing the boundaries of the leadership frontier. Further complicating our task is that more than one hundred years of leadership research have led to several paradigm shifts and a voluminous body of knowledge. On several occasions, scholars of leadership became quite frustrated by the large amount of false starts, incremental theoretical advances, and contradictory findings. As stated more than five decades ago by Warren Bennis (1959, pp. 259–260), “Of all the hazy and confounding areas in social psychology, leadership theory undoubtedly contends for Leadership: Past, Present, and Future", "title": "" }, { "docid": "ce2a19f9f3ee13978845f1ede238e5b2", "text": "Optimised allocation of system architectures is a well researched area as it can greatly reduce the developmental cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in automotive, and the increasing complexity of computer systems, both in terms of software and hardware, the applications of design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method implements integer linear programming to solve for an optimised allocation of system architectures according to a set of linear constraints while taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case using the method wherein the timing characteristics of a system were evaluated, and, the method applied to simultaneously derive a system architecture, and, an optimised allocation of the system architecture. This thesis presents evidence and validations that suggest the viability of the method and its use case in an industrial setting. The work in this thesis sets precedence for future research and development, as well as future applications of the method in both industry and academia.", "title": "" }, { "docid": "1d9361cffd8240f3b691c887def8e2f5", "text": "Twenty seven essential oils, isolated from plants representing 11 families of Portuguese flora, were screened for their nematicidal activity against the pinewood nematode (PWN), Bursaphelenchus xylophilus. The essential oils were isolated by hydrodistillation and the volatiles by distillation-extraction, and both were analysed by GC and GC-MS. High nematicidal activity was achieved with essential oils from Chamaespartium tridentatum, Origanum vulgare, Satureja montana, Thymbra capitata, and Thymus caespititius. All of these essential oils had an estimated minimum inhibitory concentration ranging between 0.097 and 0.374 mg/ml and a lethal concentration necessary to kill 100% of the population (LC(100)) between 0.858 and 1.984 mg/ml. Good nematicidal activity was also obtained with the essential oil from Cymbopogon citratus. The dominant components of the effective oils were 1-octen-3-ol (9%), n-nonanal, and linalool (both 7%) in C. tridentatum, geranial (43%), neral (29%), and β-myrcene (25%) in C. citratus, carvacrol (36% and 39%), γ-terpinene (24% and 40%), and p-cymene (14% and 7%) in O. 
vulgare and S. montana, respectively, and carvacrol (75% and 65%, respectively) in T. capitata and T. caespititius. The other essential oils obtained from Portuguese flora yielded weak or no activity. Five essential oils with nematicidal activity against PWN are reported for the first time.", "title": "" }, { "docid": "082517b83d9a9cdce3caef62a579bf2e", "text": "To enable autonomous driving, a semantic knowledge of the environment is unavoidable. We therefore introduce a multiclass classifier to determine the classes of an object relying solely on radar data. This is a challenging problem as objects of the same category have often a diverse appearance in radar data. As classification methods a random forest classifier and a deep convolutional neural network are evaluated. To get good results despite the limited training data available, we introduce a hybrid approach using an ensemble consisting of the two classifiers. Further we show that the accuracy can be improved significantly by allowing a lower detection rate.", "title": "" }, { "docid": "137fd50e270703682b7233214c18803e", "text": "As a representative of NO-SQL database, MongoDB is widely preferred for its automatic load-balancing to some extent, which including distributing read load to secondary node to reduce the load of primary one and auto-sharding to reduce the load onspecific node through automatically split data and migrate some ofthem to other nodes. However, on one hand, this process is storage-load -- Cbased, which can't meet the demand due to the facts that some particular data are accessed much more frequently than others and the 'heat' is not constant as time going on, thus the load on a node keeps changing even if with unchanged data. On the other hand, data migration will bring out too much cost to affect performance of system. In this paper, we will focus on the mechanism of automatic load balancing of MongoDB and proposean heat-based dynamic load balancing mechanism with much less cost.", "title": "" }, { "docid": "b4fa57fec99131cdf0cb6fc4795fce43", "text": "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.", "title": "" }, { "docid": "3500278940baaf6f510ad47463cbf5ed", "text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (MMaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. 
Our method M-MaxLSTMCNN consistently shows strong performances in several tasks (i.e., measure textual similarity, identify paraphrase, recognize textual entailment). According to the experimental results on STS Benchmark dataset and SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) as well as does not require pretrained word embeddings to have the same dimension.", "title": "" }, { "docid": "3b27d6bae4600236fea1e44367a58edf", "text": "We present a general framework for incorporating sequential data and arbitrary features into language modeling. The general framework consists of two parts: a hidden Markov component and a recursive neural network component. We demonstrate the effectiveness of our model by applying it to a specific application: predicting topics and sentiments in dialogues. Experiments on real data demonstrate that our method is substantially more accurate than previ-", "title": "" }, { "docid": "a6287828106cdfa0360607504016eff1", "text": "Predicting emotion categories, such as anger, joy, and anxiety, expressed by a sentence is challenging due to its inherent multi-label classification difficulty and data sparseness. In this paper, we address above two challenges by incorporating the label dependence among the emotion labels and the context dependence among the contextual instances into a factor graph model. Specifically, we recast sentence-level emotion classification as a factor graph inferring problem in which the label and context dependence are modeled as various factor functions. Empirical evaluation demonstrates the great potential and effectiveness of our proposed approach to sentencelevel emotion classification. 1", "title": "" }, { "docid": "7a6fcfbcfafa96b8e0e52f7356049f6f", "text": "This paper shows that decision trees can be used to improve the performance of case-based learning (CBL) systems. We introduce a performance task for machine learning systems called semi-exible prediction that lies between the classiication task performed by decision tree algorithms and the exible prediction task performed by conceptual clustering systems. In semi-exible prediction, learning should improve prediction of a spe-ciic set of features known a priori rather than a single known feature (as in classii-cation) or an arbitrary set of features (as in conceptual clustering). We describe one such task from natural language processing and present experiments that compare solutions to the problem using decision trees, CBL, and a hybrid approach that combines the two. In the hybrid approach, decision trees are used to specify the features to be included in k-nearest neighbor case retrieval. Results from the experiments show that the hybrid approach outperforms both the decision tree and case-based approaches as well as two case-based systems that incorporate expert knowledge into their case retrieval algorithms. Results clearly indicate that decision trees can be used to improve the performance of CBL systems and do so without reliance on potentially expensive expert knowledge.", "title": "" }, { "docid": "868c64332ae433159a45c1cfbe283341", "text": "The term \"artificial intelligence\" is a buzzword today and is heavily used to market products, services, research, conferences, and more. 
It is scientifically disputed which types of products and services do actually qualify as \"artificial intelligence\" versus simply advanced computer technologies mimicking aspects of natural intelligence.\n Yet it is undisputed that, despite often inflationary use of the term, there are mainstream products and services today that for decades were only thought to be science fiction. They range from industrial automation, to self-driving cars, robotics, and consumer electronics for smart homes, workspaces, education, and many more contexts.\n Several technological advances enable what is commonly referred to as \"artificial intelligence\". It includes connected computers and the Internet of Things (IoT), open and big data, low cost computing and storage, and many more. Yet regardless of the definition of the term artificial intelligence, technological advancements in this area provide immense potential, especially for people with disabilities.\n In this paper we explore some of these potential in the context of web accessibility. We review some existing products and services, and their support for web accessibility. We propose accessibility conformance evaluation as one potential way forward, to accelerate the uptake of artificial intelligence, to improve web accessibility.", "title": "" }, { "docid": "2fcaccc147377b4f59998d703bed5733", "text": "We present a multi-species model for the simulation of gravity driven landslides and debris flows with porous sand and water interactions. We use continuum mixture theory to describe individual phases where each species individually obeys conservation of mass and momentum and they are coupled through a momentum exchange term. Water is modeled as a weakly compressible fluid and sand is modeled with an elastoplastic law whose cohesion varies with water saturation. We use a two-grid Material Point Method to discretize the governing equations. The momentum exchange term in the mixture theory is relatively stiff and we use semi-implicit time stepping to avoid associated small time steps. Our semi-implicit treatment is explicit in plasticity and preserves symmetry of force linearizations. We develop a novel regularization of the elastic part of the sand constitutive model that better mimics plasticity during the implicit solve to prevent numerical cohesion artifacts that would otherwise have occurred. Lastly, we develop an improved return mapping for sand plasticity that prevents volume gain artifacts in the traditional Drucker-Prager model.", "title": "" }, { "docid": "bda419b065c53853f86f7fdbf0e330f2", "text": "In current e-learning studies, one of the main challenges is to keep learners motivated in performing desirable learning behaviours and achieving learning goals. Towards tackling this challenge, social e-learning contributes favourably, but it requires solutions that can reduce side effects, such as abusing social interaction tools for ‘chitchat’, and further enhance learner motivation. In this paper, we propose a set of contextual gamification strategies, which apply flow and self-determination theory for increasing intrinsic motivation in social e-learning environments. 
This paper also presents a social e-learning environment that applies these strategies, followed by a user case study, which indicates increased learners’ perceived intrinsic motivation.", "title": "" }, { "docid": "65405e7f9b510f3a15d826e9969426f2", "text": "Human concept learning is particularly impressive in two respects: the internal structure of concepts can be representationally rich, and yet the very same concepts can also be learned from just a few examples. Several decades of research have dramatically advanced our understanding of these two aspects of concepts. While the richness and speed of concept learning are most often studied in isolation, the power of human concepts may be best explained through their synthesis. This paper presents a large-scale empirical study of one-shot concept learning, suggesting that rich generative knowledge in the form of a motor program can be induced from just a single example of a novel concept. Participants were asked to draw novel handwritten characters given a reference form, and we recorded the motor data used for production. Multiple drawers of the same character not only produced visually similar drawings, but they also showed a striking correspondence in their strokes, as measured by their number, shape, order, and direction. This suggests that participants can infer a rich motorbased concept from a single example. We also show that the motor programs induced by individual subjects provide a powerful basis for one-shot classification, yielding far higher accuracy than state-of-the-art pattern recognition methods based on just the visual form.", "title": "" } ]
scidocsrr
462ad0f689280722d97c4145ad0e7c82
Employing a fully convolutional neural network for road marking detection
[ { "docid": "7228ebec1e9ffddafab50e3ac133ebad", "text": "Building robust low and mid-level image representations, beyond edge primitives, is a long-standing goal in vision. Many existing feature detectors spatially pool edge information, which destroys cues such as edge intersections, parallelism and symmetry. We present a learning framework where features that capture these mid-level cues spontaneously emerge from image data. Our approach is based on the convolutional decomposition of images under a sparsity constraint and is totally unsupervised. By building a hierarchy of such decompositions we can learn rich feature sets that are a robust image representation for both the analysis and synthesis of images.", "title": "" }, { "docid": "884121d37d1b16d7d74878fb6aff9cdb", "text": "All models are wrong, but some are useful. Acknowledgements The authors of this guide would like to thank David Warde-Farley, Guillaume Alain and Caglar Gulcehre for their valuable feedback. Special thanks to Ethan Schoonover, creator of the Solarized color scheme, whose colors were used for the figures. Feedback Your feedback is welcomed! We did our best to be as precise, informative and up to the point as possible, but should there be anything you feel might be an error or could be rephrased to be more precise or comprehensible, please don't refrain from contacting us. Likewise, drop us a line if you think there is something that might fit this technical report and you would like us to discuss – we will make our best effort to update this document. Source code and animations The code used to generate this guide along with its figures is available on GitHub. There the reader can also find an animated version of the figures.", "title": "" }, { "docid": "ca1cc40633a97f557b2c97e135534e27", "text": "This paper presents a real-time long-range lane detection and tracking approach to meet the requirements of high-speed intelligent vehicles running on highway roads. Based on a linear-parabolic two-lane highway road model and a novel strong lane marking feature named Lane Marking Segmentation, the maximal lane detection distance of this approach is up to 120 meters. Then the lane lines are selected and tracked by estimating the ego vehicle lateral offset with a Kalman filter. Experimental results with a test dataset extracted from real traffic scenes on highway roads show that the approaches proposed in this paper can achieve a high detection rate with a low time cost.", "title": "" } ]
[ { "docid": "8b002f094c6979f718426f46766b122b", "text": "Recent developments in smartphones create an ideal platform for robotics and computer vision applications: they are small, powerful, embedded devices with low-power mobile CPUs. However, though the computational power of smartphones has increased substantially in recent years, they are still not capable of performing intense computer vision tasks in real time, at high frame rates and low latency. We present a combination of FPGA and mobile CPU to overcome the computational and latency limitations of mobile CPUs alone. With the FPGA as an additional layer between the image sensor and CPU, the system is capable of accelerating computer vision algorithms to real-time performance. Low latency calculation allows for direct usage within control loops of mobile robots. A stereo camera setup with disparity estimation based on the semi global matching algorithm is implemented as an accelerated example application. The system calculates dense disparity images with 752×480 pixels resolution at 60 frames per second. The overall latency of the disparity estimation is less than 2 milliseconds. The system is suitable for any mobile robot application due to its light weight and low power consumption.", "title": "" }, { "docid": "9003a12f984d2bf2fd84984a994770f0", "text": "Sulfated polysaccharides and their lower molecular weight oligosaccharide derivatives from marine macroalgae have been shown to possess a variety of biological activities. The present paper will review the recent progress in research on the structural chemistry and the bioactivities of these marine algal biomaterials. In particular, it will provide an update on the structural chemistry of the major sulfated polysaccharides synthesized by seaweeds including the galactans (e.g., agarans and carrageenans), ulvans, and fucans. It will then review the recent findings on the anticoagulant/antithrombotic, antiviral, immuno-inflammatory, antilipidemic and antioxidant activities of sulfated polysaccharides and their potential for therapeutic application.", "title": "" }, { "docid": "48c49e1f875978ec4e2c1d4549a98ffd", "text": "Deep neural networks have been shown to be very powerful modeling tools for many supervised learning tasks involving complex input patterns. However, they can also easily overfit to training set biases and label noises. In addition to various regularizers, example reweighting algorithms are popular solutions to these problems, but they require careful tuning of additional hyperparameters, such as example mining schedules and regularization hyperparameters. In contrast to past reweighting methods, which typically consist of functions of the cost value of each example, in this work we propose a novel meta-learning algorithm that learns to assign weights to training examples based on their gradient directions. To determine the example weights, our method performs a meta gradient descent step on the current mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.", "title": "" }, { "docid": "ff1cc31ab089d5d1d09002866c7dc043", "text": "In almost every scientific field, measurements are performed over time. 
These observations lead to a collection of organized data called time series. The purpose of time-series data mining is to try to extract all meaningful knowledge from the shape of data. Even if humans have a natural capacity to perform these tasks, it remains a complex problem for computers. In this article we intend to provide a survey of the techniques applied for time-series data mining. The first part is devoted to an overview of the tasks that have captured most of the interest of researchers. Considering that in most cases, time-series task relies on the same components for implementation, we divide the literature depending on these common aspects, namely representation techniques, distance measures, and indexing methods. The study of the relevant literature has been categorized for each individual aspects. Four types of robustness could then be formalized and any kind of distance could then be classified. Finally, the study submits various research trends and avenues that can be explored in the near future. We hope that this article can provide a broad and deep understanding of the time-series data mining research field.", "title": "" }, { "docid": "35a6a9b41273d6064d4daf5f39f621af", "text": "A systematic approach to develop a literature review is attractive because it aims to achieve a repeatable, unbiased and evidence-based outcome. However the existing form of systematic review such as Systematic Literature Review (SLR) and Systematic Mapping Study (SMS) are known to be an effort, time, and intellectual intensive endeavour. To address these issues, this paper proposes a model-based approach to Systematic Review (SR) production. The approach uses a domain-specific language expressed as a meta-model to represent research literature, a meta-model to specify SR constructs in a uniform manner, and an associated development process all of which can benefit from computer-based support. The meta-models and process are validated using real-life case study. We claim that the use of meta-modeling and model synthesis lead to a reduction in time, effort and the current dependence on human expertise.", "title": "" }, { "docid": "1cd0a8b7d12ca5e147408b1aaa4c5957", "text": "OpenMusic is an open source environment dedicated to music composition. The core of this environment is a full-featured visual programming language based on Common Lisp and CLOS (Common Lisp Object System) allowing to design processes for the generation or manipulation of musical material. This language can also be used for general purpose visual programming and other (possibly extra-musical) applications.", "title": "" }, { "docid": "8686ffed021b68574b4c3547d361eac8", "text": "* To whom all correspondence should be addressed. Abstract Face detection is an important prerequisite step for successful face recognition. The performance of previous face detection methods reported in the literature is far from perfect and deteriorates ungracefully where lighting conditions cannot be controlled. We propose a method that outperforms state-of-the-art face detection methods in environments with stable lighting. In addition, our method can potentially perform well in environments with variable lighting conditions. The approach capitalizes upon our near-IR skin detection method reported elsewhere [13][14]. It ascertains the existence of a face within the skin region by finding the eyes and eyebrows. The eyeeyebrow pairs are determined by extracting appropriate features from multiple near-IR bands. 
Very successful feature extraction is achieved by simple algorithmic means like integral projections and template matching. This is because processing is constrained in the skin region and aided by the near-IR phenomenology. The effectiveness of our method is substantiated by comparative experimental results with the Identix face detector [5].", "title": "" }, { "docid": "20acbae6f76e3591c8b696481baffc90", "text": "A long-standing challenge in coreference resolution has been the incorporation of entity-level information – features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.", "title": "" }, { "docid": "33bd561e2d8e1799d5d5156cbfe3f2e5", "text": "OBJECTIVE\nTo assess the effects of Balint groups on empathy measured by the Consultation And Relational Empathy Measure (CARE) scale rated by standardized patients during objective structured clinical examination and self-rated Jefferson's School Empathy Scale - Medical Student (JSPE-MS©) among fourth-year medical students.\n\n\nMETHODS\nA two-site randomized controlled trial were planned, from October 2015 to December 2015 at Paris Diderot and Paris Descartes University, France. Eligible students were fourth-year students who gave their consent to participate. Participants were allocated in equal proportion to the intervention group or to the control group. Participants in the intervention group received a training of 7 sessions of 1.5-hour Balint groups, over 3months. The main outcomes were CARE and the JSPE-MS© scores at follow-up.\n\n\nRESULTS\nData from 299 out of 352 randomized participants were analyzed: 155 in the intervention group and 144 in the control group, with no differences in baseline measures. There was no significant difference in CARE score at follow-up between the two groups (P=0.49). The intervention group displayed significantly higher JSPE-MS© score at follow-up than the control group [Mean (SD): 111.9 (10.6) versus 107.7 (12.7), P=0.002]. The JSPE-MS© score increased from baseline to follow-up in the intervention group, whereas it decreased in the control group [1.5 (9.1) versus -1.8 (10.8), P=0.006].\n\n\nCONCLUSIONS\nBalint groups may contribute to promote clinical empathy among medical students.\n\n\nTRIAL REGISTRATION\nNCT02681380.", "title": "" }, { "docid": "0072941488ef0e22b06d402d14cbe1be", "text": "This chapter is about computational modelling of the process of musical composition, based on a cognitive model of human behaviour. The idea is to try to study not only the requirements for a computer system which is capable of musical composition, but also to relate it to human behaviour during the same process, so that it may, perhaps, work in the same way as a human composer, but also so that it may, more likely, help us understand how human composers work. Pearce et al. 
(2002) give a fuller discussion of the motivations behind this endeavour.", "title": "" }, { "docid": "24da291ca2590eb614f94f8a910e200d", "text": "CQL, a continuous query language, is supported by the STREAM prototype data stream management system (DSMS) at Stanford. CQL is an expressive SQL-based declarative language for registering continuous queries against streams and stored relations. We begin by presenting an abstract semantics that relies only on “black-box” mappings among streams and relations. From these mappings we define a precise and general interpretation for continuous queries. CQL is an instantiation of our abstract semantics using SQL to map from relations to relations, window specifications derived from SQL-99 to map from streams to relations, and three new operators to map from relations to streams. Most of the CQL language is operational in the STREAM system. We present the structure of CQL's query execution plans as well as details of the most important components: operators, interoperator queues, synopses, and sharing of components among multiple operators and queries. Examples throughout the paper are drawn from the Linear Road benchmark recently proposed for DSMSs. We also curate a public repository of data stream applications that includes a wide variety of queries expressed in CQL. The relative ease of capturing these applications in CQL is one indicator that the language contains an appropriate set of constructs for data stream processing.", "title": "" }, { "docid": "583623f15d855131d190fcef37839999", "text": "Service providers want to reduce datacenter costs by consolidating workloads onto fewer servers. At the same time, customers have performance goals, such as meeting tail latency Service Level Objectives (SLOs). Consolidating workloads while meeting tail latency goals is challenging, especially since workloads in production environments are often bursty. To limit the congestion when consolidating workloads, customers and service providers often agree upon rate limits. Ideally, rate limits are chosen to maximize the number of workloads that can be co-located while meeting each workload's SLO. In reality, neither the service provider nor customer knows how to choose rate limits. Customers end up selecting rate limits on their own in some ad hoc fashion, and service providers are left to optimize given the chosen rate limits.\n This paper describes WorkloadCompactor, a new system that uses workload traces to automatically choose rate limits simultaneously with selecting onto which server to place workloads. Our system meets customer tail latency SLOs while minimizing datacenter resource costs. Our experiments show that by optimizing the choice of rate limits, WorkloadCompactor reduces the number of required servers by 30--60% as compared to state-of-the-art approaches.", "title": "" }, { "docid": "f21e55c7509124be8fabfb1d706d76aa", "text": "CTCF and BORIS (CTCFL), two paralogous mammalian proteins sharing nearly identical DNA binding domains, are thought to function in a mutually exclusive manner in DNA binding and transcriptional regulation. Here we show that these two proteins co-occupy a specific subset of regulatory elements consisting of clustered CTCF binding motifs (termed 2xCTSes). BORIS occupancy at 2xCTSes is largely invariant in BORIS-positive cancer cells, with the genomic pattern recapitulating the germline-specific BORIS binding to chromatin. 
In contrast to the single-motif CTCF target sites (1xCTSes), the 2xCTS elements are preferentially found at active promoters and enhancers, both in cancer and germ cells. 2xCTSes are also enriched in genomic regions that escape histone to protamine replacement in human and mouse sperm. Depletion of the BORIS gene leads to altered transcription of a large number of genes and the differentiation of K562 cells, while the ectopic expression of this CTCF paralog leads to specific changes in transcription in MCF7 cells. We discover two functionally and structurally different classes of CTCF binding regions, 2xCTSes and 1xCTSes, revealed by their predisposition to bind BORIS. We propose that 2xCTSes play key roles in the transcriptional program of cancer and germ cells.", "title": "" }, { "docid": "03329ce0d0d9cc0582d00310f22366fe", "text": "Wireless personal area network and wireless sensor networks are rapidly gaining popularity, and the IEEE 802.15 Wireless Personal Area Working Group has defined no less than different standards so as to cater to the requirements of different applications. The ubiquitous home network has gained widespread attentions due to its seamless integration into everyday life. This innovative system transparently unifies various home appliances, smart sensors and energy technologies. The smart energy market requires two types of ZigBee networks for device control and energy management. Today, organizations use IEEE 802.15.4 and ZigBee to effectively deliver solutions for a variety of areas including consumer electronic device control, energy management and efficiency, home and commercial building automation as well as industrial plant management. We present the design of a multi-sensing, heating and airconditioning system and actuation application - the home users: a sensor network-based smart light control system for smart home and energy control production. This paper designs smart home device descriptions and standard practices for demand response and load management \"Smart Energy\" applications needed in a smart energy based residential or light commercial environment. The control application domains included in this initial version are sensing device control, pricing and demand response and load control applications. This paper introduces smart home interfaces and device definitions to allow interoperability among ZigBee devices produced by various manufacturers of electrical equipment, meters, and smart energy enabling products. We introduced the proposed home energy control systems design that provides intelligent services for users and we demonstrate its implementation using a real testbad.", "title": "" }, { "docid": "e19743c3b2402090f9647f669a14d554", "text": "To investigate the relation between vocal prosody and change in depression severity over time, 57 participants from a clinical trial for treatment of depression were evaluated at seven-week intervals using a semistructured clinical interview for depression severity (Hamilton Rating Scale for Depression (HRSD)). All participants met criteria for major depressive disorder (MDD) at week one. Using both perceptual judgments by naive listeners and quantitative analyses of vocal timing and fundamental frequency, three hypotheses were tested: 1) Naive listeners can perceive the severity of depression from vocal recordings of depressed participants and interviewers. 2) Quantitative features of vocal prosody in depressed participants reveal change in symptom severity over the course of depression. 
3) Interpersonal effects occur as well; such that vocal prosody in interviewers shows corresponding effects. These hypotheses were strongly supported. Together, participants' and interviewers' vocal prosody accounted for about 60 percent of variation in depression scores, and detected ordinal range of depression severity (low, mild, and moderate-to-severe) in 69 percent of cases (kappa = 0.53). These findings suggest that analysis of vocal prosody could be a powerful tool to assist in depression screening and monitoring over the course of depressive disorder and recovery.", "title": "" }, { "docid": "86429b47cefce29547ee5440a8410b83", "text": "AIM\nThe purpose of the study was to observe the outcome of trans-fistula anorectoplasty (TFARP) in treating female neonates with anorectovestibular fistula (ARVF).\n\n\nMETHODS\nA prospective study was carried out on female neonates with vestibular fistula, admitted into the surgical department of a tertiary level children hospital during the period from January 2009 to June 2011. TFARP without a covering colostomy was performed for definitive correction in the neonatal period in all. Data regarding demographics, clinical presentation, associated anomalies, preoperative findings, preoperative preparations, operative technique, difficulties faced during surgery, duration of surgery, postoperative course including complications, hospital stay, bowel habits and continence was prospectively compiled and analyzed. Anorectal function was measured by the modified Wingspread scoring as, \"excellent\", \"good\", \"fair\" and \"poor\".\n\n\nRESULTS\nThirty-nine neonates with vestibular fistula underwent single stage TFARP. Mean operation time was 81 minutes and mean hospital stay was 6 days. Three (7.7%) patients suffered vaginal tear during separation from the rectal wall. Two patients (5.1%) developed wound infection at neoanal site that resulted in anal stenosis. Eight (20.51%) children in the series are more than 3 years of age and are continent; all have attained \"excellent\" fecal continence score. None had constipation or soiling. Other 31 (79.5%) children less than 3 years of age have satisfactory anocutaneous reflex and anal grip on per rectal digital examination, though occasional soiling was observed in 4 patients.\n\n\nCONCLUSION\nPrimary repair of ARVF in female neonates by TFARP without dividing the perineum is a feasible procedure with good cosmetic appearance and good anal continence. Separation of the rectum from the posterior wall of vagina is the most delicate step of the operation, takes place under direct vision. It is very important to keep the perineal body intact. With meticulous preoperative bowel preparation and post operative wound care and bowel management, single stage reconstruction is possible in neonates with satisfactory results.", "title": "" }, { "docid": "9d98fe5183d53bfaaa42e642bc03b9b3", "text": "Cyber-attacks continue to increase worldwide, leading to significant loss or misuse of information assets. Most of the existing intrusion detection systems rely on per-packet inspection, a resource consuming task in today’s high speed networks. A recent trend is to analyze netflows (or simply flows) instead of packets, a technique performed at a relative low level leading to high false alarm rates. 
Since analyzing raw data extracted from flows lacks the semantic information needed to discover attacks, a novel approach is introduced, which uses contextual information to automatically identify and query possible semantic links between different types of suspicious activities extracted from flows. Time, location, and other contextual information mined from flows is applied to generate semantic links among alerts raised in response to suspicious flows. These semantic links are identified through an inference process on probabilistic semantic link networks (SLNs), which receive an initial prediction from a classifier that analyzes incoming flows. The SLNs are then queried at run-time to retrieve other relevant predictions. We show that our approach can be extended to detect unknown attacks in flows as variations of known attacks. An extensive validation of our approach has been performed with a prototype system on several benchmark datasets yielding very promising results in detecting both known and unknown attacks.", "title": "" }, { "docid": "a0850b5f8b2d994b50bb912d6fca3dfb", "text": "In this paper we describe the development of an accurate, smallfootprint, large vocabulary speech recognizer for mobile devices. To achieve the best recognition accuracy, state-of-the-art deep neural networks (DNNs) are adopted as acoustic models. A variety of speedup techniques for DNN score computation are used to enable real-time operation on mobile devices. To reduce the memory and disk usage, on-the-fly language model (LM) rescoring is performed with a compressed n-gram LM. We were able to build an accurate and compact system that runs well below real-time on a Nexus 4 Android phone.", "title": "" }, { "docid": "40dc7de2a08c07183606235500df3c4f", "text": "Aerial imagery of an urban environment is often characterized by significant occlusions, sharp edges, and textureless regions, leading to poor 3D reconstruction using conventional multi-view stereo methods. In this paper, we propose a novel approach to 3D reconstruction of urban areas from a set of uncalibrated aerial images. A very general structural prior is assumed that urban scenes consist mostly of planar surfaces oriented either in a horizontal or an arbitrary vertical orientation. In addition, most structural edges associated with such surfaces are also horizontal or vertical. These two assumptions provide powerful constraints on the underlying 3D geometry. The main contribution of this paper is to translate the two constraints on 3D structure into intra-image-column and inter-image-column constraints, respectively, and to formulate the dense reconstruction as a 2-pass Dynamic Programming problem, which is solved in complete parallel on a GPU. The result is an accurate cloud of 3D dense points of the underlying urban scene. Our algorithm completes the reconstruction of 1M points with 160 available discrete height levels in under a hundred seconds. Results on multiple datasets show that we are capable of preserving a high level of structural detail and visual quality.", "title": "" }, { "docid": "ca410a7cf7f36fdd145aed738f147d3f", "text": "A range of values of a real function f : Ed + Iw can be used to implicitly define a subset of Euclidean space Ed. Such “implicit functions” have many uses in geometric and solid modeling. This paper focuses on the properties and construction of real functions for the representation of rigid solids (compact, semi-analytic, and regular subsets of Ed). 
We review some known facts about real functions defining compact semi-analytic sets, and their applications. The theory of R-functions developed in (Rvachev, 1982) provides the means for constructing real function representations of solids described by the standard (non-regularized) set operations. But solids are not closed under the standard set operations, and such real function representations are rarely available in modern solid modeling systems. More generally, assuring that a real function f represents a regular set may be difficult. Until now, the regularity has either been assumed, or treated in an ad hoc fashion. We show that topological and extremal properties of real functions can be used to test for regularity, and discuss procedures for constructing real functions with desired properties for arbitrary solids.", "title": "" } ]
scidocsrr
e1dd082607bfcef921ce86b9ea05a6b5
Supervised and Semi-Supervised Text Categorization using LSTM for Region Embeddings
[ { "docid": "a3866467e9a5a1ee2e35b9f2e477a3e3", "text": "This paper presents a new semi-supervised framework with convolutional neural networks (CNNs) for text categorization. Unlike the previous approaches that rely on word embeddings, our method learns embeddings of small text regions from unlabeled data for integration into a supervised CNN. The proposed scheme for embedding learning is based on the idea of two-view semi-supervised learning, which is intended to be useful for the task of interest even though the training is done on unlabeled data. Our models achieve better results than previous approaches on sentiment classification and topic classification tasks.", "title": "" } ]
[ { "docid": "0fd4b7ed6e3c67fb9d4bb70e83d8796c", "text": "The biological properties of dietary polyphenols are greatly dependent on their bioavailability that, in turn, is largely influenced by their degree of polymerization. The gut microbiota play a key role in modulating the production, bioavailability and, thus, the biological activities of phenolic metabolites, particularly after the intake of food containing high-molecular-weight polyphenols. In addition, evidence is emerging on the activity of dietary polyphenols on the modulation of the colonic microbial population composition or activity. However, although the great range of health-promoting activities of dietary polyphenols has been widely investigated, their effect on the modulation of the gut ecology and the two-way relationship \"polyphenols ↔ microbiota\" are still poorly understood. Only a few studies have examined the impact of dietary polyphenols on the human gut microbiota, and most were focused on single polyphenol molecules and selected bacterial populations. This review focuses on the reciprocal interactions between the gut microbiota and polyphenols, the mechanisms of action and the consequences of these interactions on human health.", "title": "" }, { "docid": "6b933bbad26efaf65724d0c923330e75", "text": "This paper presents a 138-170 GHz active frequency doubler implemented in a 0.13 μm SiGe BiCMOS technology with a peak output power of 5.6 dBm and peak power-added efficiency of 7.6%. The doubler achieves a peak conversion gain of 4.9 dB and consumes only 36 mW of DC power at peak drive through the use of a push-push frequency doubling stage optimized for low drive power, along with a low-power output buffer. To the best of our knowledge, this doubler achieves the highest output power, efficiency, and fundamental frequency suppression of all D-band and G-band SiGe HBT frequency doublers to date.", "title": "" }, { "docid": "609729da28fec217c5c7cdbb48b8bde2", "text": "We introduce a theorem proving algorithm that uses practically no domain heuristics for guiding its connection-style proof search. Instead, it runs many MonteCarlo simulations guided by reinforcement learning from previous proof attempts. We produce several versions of the prover, parameterized by different learning and guiding algorithms. The strongest version of the system is trained on a large corpus of mathematical problems and evaluated on previously unseen problems. The trained system solves within the same number of inferences over 40% more problems than a baseline prover, which is an unusually high improvement in this hard AI domain. To our knowledge this is the first time reinforcement learning has been convincingly applied to solving general mathematical problems on a large scale.", "title": "" }, { "docid": "5dac8ef81c7a6c508c603b3fd6a87581", "text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. 
We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions, but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.", "title": "" }, { "docid": "3585ee8052b23d2ea996dc8ad14cbb04", "text": "The 5th generation (5G) of mobile radio access technologies is expected to become available for commercial launch around 2020. In this paper, we present our envisioned 5G system design optimized for small cell deployment taking a clean slate approach, i.e. removing most compatibility constraints with the previous generations of mobile radio access technologies. This paper mainly covers the physical layer aspects of the 5G concept design.", "title": "" }, { "docid": "0823b3c01d54f479ca8fe470f0e41c66", "text": "Social media is emerging as an important information-based communication tool for disaster management. Yet there are many relief organizations that are not able to develop strategies and allocate resources to effectively use social media for disaster management. The reason behind this inability may be a lack of understanding regarding the different functionalities of social media. In this paper, we examine the literature using content analysis to understand the current usage of social media in disaster management. We draw on the honeycomb framework and the results of our content analysis to suggest a new framework that can help in utilizing social media more effectively during the different phases of disaster management. We also discuss the implications of our study. KEYWORDS Disaster Management, Disaster Phases, Honeycomb Framework, Social Media Functionality, Social Media", "title": "" }, { "docid": "a0407424fce71b9e4119d1d9fefc5542", "text": "The design and development of complex engineering products require the efforts and collaboration of hundreds of participants from diverse backgrounds resulting in complex relationships among both people and tasks. Many of the traditional project management tools (PERT, Gantt and CPM methods) do not address problems stemming from this complexity. While these tools allow the modeling of sequential and parallel processes, they fail to address interdependency (feedback and iteration), which is common in complex product development (PD) projects. To address this issue, a matrix-based tool called the Design Structure Matrix (DSM) has evolved. This method differs from traditional project-management tools because it focuses on representing information flows rather than work flows. The DSM method is an information exchange model that allows the representation of complex task (or team) relationships in order to determine a sensible sequence (or grouping) for the tasks (or teams) being modeled. This article will cover how the basic method works and how you can use the DSM to improve the planning, execution, and management of complex PD projects using different algorithms (i.e., partitioning, tearing, banding, clustering, simulation, and eigenvalue analysis). 
Introduction: matrices and projects Consider a system (or project) that is composed of two elements/sub-systems (or activities/phases): element \"A\" and element \"B\". A graph may be developed to represent this system pictorially. The graph is constructed by allowing a vertex/node on the graph to represent a system element and an edge joining two nodes to represent the relationship between two system elements. The directionality of influence from one element to another is captured by an arrow instead of a simple link. The resultant graph is called a directed graph or simply a digraph. There are three basic building blocks for describing the relationship amongst system elements: parallel (or concurrent), sequential (or dependent) and coupled (or interdependent) (fig. 1). Fig. 1: Three Configurations that Characterize a System Relationship (Parallel, Sequential, Coupled).", "title": "" }, { "docid": "1beba2c797cb5a4b72b54fd71265a25f", "text": "Modularity is widely used to effectively measure the strength of the community structure found by community detection algorithms. However, modularity maximization suffers from two opposite yet coexisting problems: in some cases, it tends to favor small communities over large ones while in others, large communities over small ones. The latter tendency is known in the literature as the resolution limit problem. To address them, we propose to modify modularity by subtracting from it the fraction of edges connecting nodes of different communities and by including community density into modularity. We refer to the modified metric as Modularity Density and we demonstrate that it indeed resolves both problems mentioned above. We describe the motivation for introducing this metric by using intuitively clear and simple examples. We also prove that this new metric solves the resolution limit problem. Finally, we discuss the results of applying this metric, modularity, and several other popular community quality metrics to two real dynamic networks. The results imply that Modularity Density is consistent with all the community quality measurements but not modularity, which suggests that Modularity Density is an improved measurement of the community quality compared to modularity.", "title": "" }, { "docid": "781ebbf85a510cfd46f0c824aa4aba7e", "text": "Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include: intelligent video surveillance, ambient assisted living, human computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafting to deep learning techniques for HAR. However, handcrafted representation-based approaches are still widely used due to some bottlenecks such as computational complexity of deep learning techniques for activity recognition. However, approaches based on handcrafted representation are not able to handle complex scenarios due to their limitations and incapability; therefore, resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussions on these approaches. 
In addition to this, the well-known public datasets available for experimentation and important applications of HAR are also presented to provide further insight into the field. This is the first review paper of its kind which presents all these aspects of HAR in a single review article with comprehensive coverage of each part. Finally, the paper is concluded with important discussions and research directions in the domain of HAR.", "title": "" }, { "docid": "34992b86a8ac88c5f5bbca770954ae61", "text": "Entity search over text corpora is not geared for relationship queries where answers are tuples of related entities and where a query often requires joining cues from multiple documents. With large knowledge graphs, structured querying on their relational facts is an alternative, but often suffers from poor recall because of mismatches between user queries and the knowledge graph or because of weakly populated relations.\n This paper presents the TriniT search engine for querying and ranking on extended knowledge graphs that combine relational facts with textual web contents. Our query language is designed on the paradigm of SPO triple patterns, but is more expressive, supporting textual phrases for each of the SPO arguments. We present a model for automatic query relaxation to compensate for mismatches between the data and a user's query. Query answers -- tuples of entities -- are ranked by a statistical language model. We present experiments with different benchmarks, including complex relationship queries, over a combination of the Yago knowledge graph and the entity-annotated ClueWeb'09 corpus.", "title": "" }, { "docid": "afa7ccbc17103f199abc38e98b6049bf", "text": "Cloud computing is becoming a popular paradigm. Many recent new services are based on cloud environments, and a lot of people are using cloud networks. Since many diverse hosts and network configurations coexist in a cloud network, it is essential to protect each of them in the cloud network from threats. To do this, basically, we can employ existing network security devices, but applying them to a cloud network requires more considerations for its complexity, dynamism, and diversity. In this paper, we propose a new framework, CloudWatcher, which provides monitoring services for large and dynamic cloud networks. This framework automatically detours network packets to be inspected by pre-installed network security devices. In addition, all these operations can be implemented by writing a simple policy script, thus, a cloud network administrator is able to protect his cloud network easily. We have implemented the proposed framework, and evaluated it on different test network environments.", "title": "" }, { "docid": "921d9dc34f32522200ddcd606d22b6b4", "text": "The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most powerful evolutionary algorithms for real-valued single-objective optimization. In this paper, we develop a variant of the CMA-ES for multi-objective optimization (MOO). We first introduce a single-objective, elitist CMA-ES using plus-selection and step size control based on a success rule. This algorithm is compared to the standard CMA-ES. The elitist CMA-ES turns out to be slightly faster on unimodal functions, but is more prone to getting stuck in sub-optimal local minima. In the new multi-objective CMA-ES (MO-CMA-ES), a population of individuals that adapt their search strategy as in the elitist CMA-ES is maintained. These are subject to multi-objective selection. 
The selection is based on non-dominated sorting using either the crowding-distance or the contributing hypervolume as second sorting criterion. Both the elitist single-objective CMA-ES and the MO-CMA-ES inherit important invariance properties, in particular invariance against rotation of the search space, from the original CMA-ES. The benefits of the new MO-CMA-ES in comparison to the well-known NSGA-II and to NSDE, a multi-objective differential evolution algorithm, are experimentally shown.", "title": "" }, { "docid": "22293b6953e2b28e1b3dc209649a7286", "text": "The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. Characteristic features of this new model are (i) that it is a model for adaptive computational systems, (ii) that it provides a method for employing randomly connected circuits, or even “found” physical objects for meaningful computations, (iii) that it provides a theoretical context where heterogeneous, rather than stereotypical, local gates or processors increase the computational power of a circuit, (iv) that it provides a method for multiplexing different computations (on a common input) within the same circuit. This chapter reviews the motivation for this model, its theoretical background, and current work on implementations of this model in innovative artificial computing devices.", "title": "" }, { "docid": "fe03dc323c15d5ac390e67f9aa0415b8", "text": "Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a \"real or fake\" psychophysical experiment, and that they convey significant information about material properties and physical interactions.", "title": "" }, { "docid": "c82901a585d9c924f4686b4d0373e774", "text": "Object detection is a major challenge in computer vision, involving both object classification and object localization within a scene. While deep neural networks have been shown in recent years to yield very powerful techniques for tackling the challenge of object detection, one of the biggest challenges with enabling such object detection networks for widespread deployment on embedded devices is high computational and memory requirements. Recently, there has been an increasing focus in exploring small deep neural network architectures for object detection that are more suitable for embedded devices, such as Tiny YOLO and SqueezeDet. 
Inspired by the efficiency of the Fire microarchitecture introduced in SqueezeNet and the object detection performance of the single-shot detection macroarchitecture introduced in SSD, this paper introduces Tiny SSD, a single-shot detection deep convolutional neural network for real-time embedded object detection that is composed of a highly optimized, non-uniform Fire subnetwork stack and a non-uniform sub-network stack of highly optimized SSD-based auxiliary convolutional feature layers designed specifically to minimize model size while maintaining object detection performance. The resulting Tiny SSD possesses a model size of 2.3MB (~26X smaller than Tiny YOLO) while still achieving an mAP of 61.3% on VOC 2007 (~4.2% higher than Tiny YOLO). These experimental results show that very small deep neural network architectures can be designed for real-time object detection that are well-suited for embedded scenarios.", "title": "" }, { "docid": "20cb6f1ecf0464751a3af5947f708c4d", "text": "Article History Received: 4 April 2018 Revised: 30 April 2018 Accepted: 2 May 2018 Published: 4 May 2018", "title": "" }, { "docid": "b8032e13156e0168e2c5850cdf452e5b", "text": "We observe that end-to-end memory networks (MN) trained for task-oriented dialogue, such as for recommending restaurants to a user, suffer from an out-of-vocabulary (OOV) problem – the entities returned by the Knowledge Base (KB) may not be seen by the network at training time, making it impossible for it to use them in dialogue. We propose a Hierarchical Pointer Memory Network (HyP-MN), in which the next word may be generated from the decode vocabulary or copied from a hierarchical memory maintaining KB results and previous utterances. Evaluating over the dialog bAbI tasks, we find that HyP-MN drastically outperforms MN obtaining 12% overall accuracy gains. Further analysis reveals that MN fails completely in recommending any relevant restaurant, whereas HyP-MN recommends the best next restaurant 80% of the time.", "title": "" }, { "docid": "831845dfb48d2bd9d7d86031f3862fa5", "text": "This paper presents the analysis and implementation of an LCLC resonant converter working as maximum power point tracker (MPPT) in a PV system. This converter must guarantee a constant DC output voltage and must vary its effective input resistance in order to extract the maximum power of the PV generator. Preliminary analysis concludes that not all resonant load topologies can achieve the design conditions for a MPPT. Only the LCLC and LLC converter are suitable for this purpose.", "title": "" }, { "docid": "7105302557aa312e3dedbc7d7cc6e245", "text": "a Canisius College, Richard J. Wehle School of Business, Department of Management and Marketing, 2001 Main Street, Buffalo, NY 14208-1098, United States b Clemson University, College of Business and Behavioral Science, Department of Marketing, 245 Sirrine Hall, Clemson, SC 29634-1325, United States c University of Alabama at Birmingham, School of Business, Department of Marketing, Industrial Distribution and Economics, 1150 10th Avenue South, Birmingham, AL 35294, United States d Vlerick School of Management Reep 1, BE-9000 Ghent Belgium", "title": "" }, { "docid": "1be58e70089b58ca3883425d1a46b031", "text": "In this work, we propose a novel way to consider the clustering and the reduction of the dimension simultaneously. Indeed, our approach takes advantage of the mutual reinforcement between data reduction and clustering tasks. 
The use of a low-dimensional representation can be of help in providing simpler and more interpretable solutions. We show that by doing so, our model is able to better approximate the relaxed continuous dimension reduction solution by the true discrete clustering solution. Experimental results show that our method gives better results in terms of clustering than the state-of-the-art algorithms devoted to similar tasks for data sets with different properties.", "title": "" } ]
scidocsrr
7b42e5fe2898a74a1aabdda96f2b3450
The MGB challenge: Evaluating multi-genre broadcast media recognition
[ { "docid": "a16139b8924fc4468086c41fedeef3d9", "text": "Grapheme-to-phoneme conversion is the task of finding the pronunciation of a word given its written form. It has important applications in text-to-speech and speech recognition. Joint-sequence models are a simple and theoretically stringent probabilistic framework that is applicable to this problem. This article provides a selfcontained and detailed description of this method. We present a novel estimation algorithm and demonstrate high accuracy on a variety of databases. Moreover we study the impact of the maximum approximation in training and transcription, the interaction of model size parameters, n-best list generation, confidence measures, and phoneme-to-grapheme conversion. Our software implementation of the method proposed in this work is available under an Open Source license.", "title": "" } ]
[ { "docid": "df175c91322be3a87dfba84793e9b942", "text": "Due to an increasing awareness about dental erosion, many clinicians would like to propose treatments even at the initial stages of the disease. However, when the loss of tooth structure is visible only to the professional eye, and it has not affected the esthetics of the smile, affected patients do not usually accept a full-mouth rehabilitation. Reducing the cost of the therapy, simplifying the clinical steps, and proposing noninvasive adhesive techniques may promote patient acceptance. In this article, the treatment of an ex-bulimic patient is illustrated. A modified approach of the three-step technique was followed. The patient completed the therapy in five short visits, including the initial one. No tooth preparation was required, no anesthesia was delivered, and the overall (clinical and laboratory) costs were kept low. At the end of the treatment, the patient was very satisfied from a biologic and functional point of view.", "title": "" }, { "docid": "a15275cc08ad7140e6dd0039e301dfce", "text": "Cardiovascular disease is more prevalent in type 1 and type 2 diabetes, and continues to be the leading cause of death among adults with diabetes. Although atherosclerotic vascular disease has a multi-factorial etiology, disorders of lipid metabolism play a central role. The coexistence of diabetes with other risk factors, in particular with dyslipidemia, further increases cardiovascular disease risk. A characteristic pattern, termed diabetic dyslipidemia, consists of increased levels of triglycerides, low levels of high density lipoprotein cholesterol, and postprandial lipemia, and is mostly seen in patients with type 2 diabetes or metabolic syndrome. This review summarizes the trends in the prevalence of lipid disorders in diabetes, advances in the mechanisms contributing to diabetic dyslipidemia, and current evidence regarding appropriate therapeutic recommendations.", "title": "" }, { "docid": "4b8ee1a2e6d80a0674e2ff8f940d16f9", "text": "Classification and knowledge extraction from complex spatiotemporal brain data such as EEG or fMRI is a complex challenge. A novel architecture named the NeuCube has been established in prior literature to address this. A number of key points in the implementation of this framework, including modular design, extensibility, scalability, the source of the biologically inspired spatial structure, encoding, classification, and visualisation tools must be considered. A Python version of this framework that conforms to these guidelines has been implemented.", "title": "" }, { "docid": "895b5d767119676e9eb5264eb3e6e7b1", "text": "This paper presents a preliminary design and analysis of an optimal energy management and control system for a parallel hybrid electric vehicle using hybrid dynamic control system theory and design tools. The vehicle longitudinal dynamics is analyzed. The practical operation modes of the hybrid electric vehicle are introduced with regard to the given power train configuration. In order to synthesize the vehicle continuous dynamics and the discrete transition between the vehicle operation modes, the hybrid dynamical system theory is applied to reformulate such a complex dynamical system in which the interaction of discrete and continuous dynamics are involved. A dynamic programming-based method is developed to determine the optimal power split between both sources of energy. 
Computer simulation results are presented and demonstrate the effectiveness of the proposed design and applicability and practicality of the design in real-time implementation. Copyright 2002 EVS19", "title": "" }, { "docid": "f7fa13048b42a566d8621f267141f80d", "text": "The software underpinning today's IT systems needs to adapt dynamically and predictably to rapid changes in system workload, environment and objectives. We describe a software framework that achieves such adaptiveness for IT systems whose components can be modelled as Markov chains. The framework comprises (i) an autonomic architecture that uses Markov-chain quantitative analysis to dynamically adjust the parameters of an IT system in line with its state, environment and objectives; and (ii) a method for developing instances of this architecture for real-world systems. Two case studies are presented that use the framework successfully for the dynamic power management of disk drives, and for the adaptive management of cluster availability within data centres, respectively.", "title": "" }, { "docid": "86bb5aab780892d89d7a0057f14fad9f", "text": "Complicated grief is a prolonged grief disorder with elements of a stress response syndrome. We have previously proposed a biobehavioral model showing the pathway to complicated grief. Avoidance is a component that can be difficult to assess and pivotal to treatment. Therefore we developed an avoidance questionnaire to characterize avoidance among patients with CG. We further explain our complicated grief model and provide results of a study of 128 participants in a treatment study of CG who completed a 15-item Grief-related Avoidance Questionnaire (GRAQ). Mean (SD) GRAQ score was 25. 0 ± 12.5 with a range of 0–60. Cronbach’s alpha was 0.87 and test re-test correlation was 0.88. Correlation analyses showed good convergent and discriminant validity. Avoidance of reminders of the loss contributed to functional impairment after controlling for other symptoms of complicated grief. In this paper we extend our previously described attachment-based biobehavioral model of CG. We envision CG as a stress response syndrome that results from failure to integrate information about death of an attachment figure into an effectively functioning secure base schema and/or to effectively re-engage the exploratory system in a world without the deceased. Avoidance is a key element of the model.", "title": "" }, { "docid": "c5851a9fe60c0127a351668ba5b0f21d", "text": "We examined salivary C-reactive protein (CRP) levels in the context of tobacco smoke exposure (TSE) in healthy youth. We hypothesized that there would be a dose-response relationship between TSE status and salivary CRP levels. This work is a pilot study (N = 45) for a larger investigation in which we aim to validate salivary CRP against serum CRP, the gold standard measurement of low-grade inflammation. Participants were healthy youth with no self-reported periodontal disease, no objectively measured obesity/adiposity, and no clinical depression, based on the Beck Depression Inventory (BDI-II). We assessed tobacco smoking and confirmed smoking status (non-smoking, passive smoking, and active smoking) with salivary cotinine measurement. We measured salivary CRP by the ELISA method. We controlled for several potential confounders. We found evidence for the existence of a dose-response relationship between the TSE status and salivary CRP levels. 
Our preliminary findings indicate that salivary CRP seems to have a similar relation to TSE as its widely used serum (systemic inflammatory) biomarker counterpart.", "title": "" }, { "docid": "9cf4d68ab09e98cd5b897308c8791d26", "text": "Gesture Recognition Technology has evolved greatly over the years. The past has seen the contemporary Human – Computer Interface techniques and their drawbacks, which limit the speed and naturalness of the human brain and body. As a result gesture recognition technology has developed since the early 1900s with a view to achieving ease and lessening the dependence on devices like keyboards, mice and touchscreens. Attempts have been made to combine natural gestures to operate with the technology around us to enable us to make optimum use of our body gestures making our work faster and more human friendly. The present has seen huge development in this field ranging from devices like virtual keyboards, video game controllers to advanced security systems which work on face, hand and body recognition techniques. The goal is to make full use of the movements of the body and every angle made by the parts of the body in order to supplement technology to become human friendly and understand natural human behavior and gestures. The future of this technology is very bright with prototypes of amazing devices in research and development to make the world equipped with digital information at hand whenever and wherever required.", "title": "" }, { "docid": "8f444ac95ff664e06e1194dd096e4f31", "text": "Entity alignment aims to link entities and their counterparts among multiple knowledge graphs (KGs). Most existing methods typically rely on external information of entities such as Wikipedia links and require costly manual feature construction to complete alignment. In this paper, we present a novel approach for entity alignment via joint knowledge embeddings. Our method jointly encodes both entities and relations of various KGs into a unified low-dimensional semantic space according to a small seed set of aligned entities. During this process, we can align entities according to their semantic distance in this joint semantic space. More specifically, we present an iterative and parameter sharing method to improve alignment performance. Experiment results on realworld datasets show that, as compared to baselines, our method achieves significant improvements on entity alignment, and can further improve knowledge graph completion performance on various KGs with the favor of joint knowledge embeddings.", "title": "" }, { "docid": "99ee1fe74b0b8a9679b8b7bd005d54ab", "text": "An essential characteristic in many e-commerce settings is that website visitors can have very specific short-term shopping goals when they browse the site. Relying solely on long-term user models that are pre-trained on historical data can therefore be insufficient for a suitable next-basket recommendation. Simple \"real-time\" recommendation approaches based, e.g., on unpersonalized co-occurrence patterns, on the other hand do not fully exploit the available information about the user's long-term preference profile. In this work, we aim to explore and quantify the effectiveness of using and combining long-term models and short-term adaptation strategies. We conducted an empirical evaluation based on a novel evaluation design and two real-world datasets. The results indicate that maintaining short-term content-based and recency-based profiles of the visitors can lead to significant accuracy increases. 
At the same time, the experiments show that the choice of the algorithm for learning the long-term preferences is particularly important at the beginning of new shopping sessions.", "title": "" }, { "docid": "eaec7fb5490ccabd52ef7b4b5abd25f6", "text": "Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with the sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than the handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on the dataset that contains 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than the handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance compared with other state-of-the-art segmentation methods.", "title": "" }, { "docid": "3763da6b72ee0a010f3803a901c9eeb2", "text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.", "title": "" }, { "docid": "e629f1935ab4f69ffaefdaa59b374a05", "text": "Higher-order low-rank tensors naturally arise in many applications including hyperspectral data recovery, video inpainting, seismic data reconstruction, and so on. We propose a new model to recover a low-rank tensor by simultaneously performing low-rank matrix factorizations to the all-mode matricizations of the underlying tensor. 
An alternating minimization algorithm is applied to solve the model, along with two adaptive rank-adjusting strategies when the exact rank is not known. Phase transition plots reveal that our algorithm can recover a variety of synthetic low-rank tensors from significantly fewer samples than the compared methods, which include a matrix completion method applied to tensor recovery and two state-of-the-art tensor completion methods. Further tests on real-world data show similar advantages. Although our model is non-convex, our algorithm performs consistently throughout the tests and give better results than the compared methods, some of which are based on convex models. In addition, the global convergence of our algorithm can be established in the sense that the gradient of Lagrangian function converges to zero.", "title": "" }, { "docid": "5739713d17ec5cc6952832644b2a1386", "text": "Group Support Systems (GSS) can improve the productivity of Group Work by offering a variety of tools to assist a virtual group across geographical distances. Experience shows that the value of a GSS depends on how purposefully and skillfully it is used. We present a framework for a universal GSS based on a thinkLet- and thinXel-based Group Process Modeling Language (GPML). Our framework approach uses the GPML to describe different kinds of group processes in an unambiguous and compact representation and to guide the participants automatically through these processes. We assume that a GSS based on this GPML can provide the following advantages: to support the user by designing and executing a collaboration process and to increase the applicability of GSSs for different kinds of group processes. We will present a prototype and use different kinds of group processes to illustrate the application of a GPML for a universal GSS.", "title": "" }, { "docid": "ed8fef21796713aba1a6375a840c8ba3", "text": "PURPOSE\nThe novel self-paced maximal-oxygen-uptake (VO2max) test (SPV) may be a more suitable alternative to traditional maximal tests for elite athletes due to the ability to self-regulate pace. This study aimed to examine whether the SPV can be administered on a motorized treadmill.\n\n\nMETHODS\nFourteen highly trained male distance runners performed a standard graded exercise test (GXT), an incline-based SPV (SPVincline), and a speed-based SPV (SPVspeed). The GXT included a plateau-verification stage. Both SPV protocols included 5×2-min stages (and a plateau-verification stage) and allowed for self-pacing based on fixed increments of rating of perceived exertion: 11, 13, 15, 17, and 20. The participants varied their speed and incline on the treadmill by moving between different marked zones in which the tester would then adjust the intensity.\n\n\nRESULTS\nThere was no significant difference (P=.319, ES=0.21) in the VO2max achieved in the SPVspeed (67.6±3.6 mL·kg(-1)·min(-1), 95%CI=65.6-69.7 mL·kg(-1)·min(-1)) compared with that achieved in the GXT (68.6±6.0 mL·kg(-1)·min(-1), 95%CI=65.1-72.1 mL·kg(-1)·min(-1)). 
Participants achieved a significantly higher VO2max in the SPVincline (70.6±4.3 mL·kg(-1)·min(-1), 95%CI=68.1-73.0 mL·kg(-1)·min(-1)) than in either the GXT (P=.027, ES=0.39) or SPVspeed (P=.001, ES=0.76).\n\n\nCONCLUSIONS\nThe SPVspeed protocol produces VO2max values similar to those obtained in the GXT and may represent a more appropriate and athlete-friendly test that is more oriented toward the variable speed found in competitive sport.", "title": "" }, { "docid": "87068ab038d08f9e1e386bc69ee8a5b2", "text": "The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and how they affect learning guarantees is still missing. In this paper, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried out by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with homogeneous activation functions. This analysis allows us to separate data representation from learning, and to provide a canonical measure of model complexity, the RKHS norm, which controls both stability and generalization of any learned model. In addition to models in the constructed RKHS, our stability analysis also applies to convolutional networks with generic activations such as rectified linear units, and we discuss its relationship with recent generalization bounds based on spectral norms.", "title": "" }, { "docid": "919dc4727575e2ce0419d31b03ddfbf3", "text": "In wireless ad hoc networks, although defense strategies such as intrusion detection systems (IDSs) can be deployed at each mobile node, significant constraints are imposed in terms of the energy expenditure of such systems. In this paper, we propose a game theoretic framework to analyze the interactions between pairs of attacking/defending nodes using a Bayesian formulation. We study the achievable Nash equilibrium for the attacker/defender game in both static and dynamic scenarios. The dynamic Bayesian game is a more realistic model, since it allows the defender to consistently update his belief on his opponent's maliciousness as the game evolves. A new Bayesian hybrid detection approach is suggested for the defender, in which a lightweight monitoring system is used to estimate his opponent's actions, and a heavyweight monitoring system acts as a last resort of defense. We show that the dynamic game produces energy-efficient monitoring strategies for the defender, while improving the overall hybrid detection power.", "title": "" }, { "docid": "6052c0f2adfe4b75f96c21a5ee128bf5", "text": "I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently-developed method of \"simulated tempering\", the \"tempered transition\" method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. 
The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that unfortunately is cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are \"deceptive\".", "title": "" }, { "docid": "c76cfe38185146f60a416eedac962750", "text": "OBJECTIVE\nRepeated public inquiries into child abuse tragedies in Britain demonstrate the level of public concern about the services designed to protect children. These inquiries identify faults in professionals' practice but the similarities in their findings indicate that they are having insufficient impact on improving practice. This study is based on the hypothesis that the recurrent errors may be explicable as examples of the typical errors of human reasoning identified by psychological research.\n\n\nMETHODS\nThe sample comprised all child abuse inquiry reports published in Britain between 1973 and 1994 (45 in total). Using a content analysis and a framework derived from psychological research on reasoning, a study was made of the reasoning of the professionals involved and the findings of the inquiries.\n\n\nRESULTS\nIt was found that professionals based assessments of risk on a narrow range of evidence. It was biased towards the information readily available to them, overlooking significant data known to other professionals. The range was also biased towards the more memorable data, that is, towards evidence that was vivid, concrete, arousing emotion and either the first or last information received. The evidence was also often faulty, due, in the main, to biased or dishonest reporting or errors in communication. A critical attitude to evidence was found to correlate with whether or not the new information supported the existing view of the family. A major problem was that professionals were slow to revise their judgements despite a mounting body of evidence against them.\n\n\nCONCLUSIONS\nErrors in professional reasoning in child protection work are not random but predictable on the basis of research on how people intuitively simplify reasoning processes in making complex judgements. These errors can be reduced if people are aware of them and strive consciously to avoid them. Aids to reasoning need to be developed that recognize the central role of intuitive reasoning but offer methods for checking intuitive judgements more rigorously and systematically.", "title": "" }, { "docid": "634c134b1ec0c9fb985c93a63188308a", "text": "Automatic processing of metaphor can be clearly divided into two subtasks: metaphor recognition (distinguishing between literal and metaphorical language in a text) and metaphor interpretation (identifying the intended literal meaning of a metaphorical expression). Both of them have been repeatedly addressed in NLP. 
This paper is the first comprehensive and systematic review of the existing computational models of metaphor, the issues of metaphor annotation in corpora and the available resources.", "title": "" } ]
scidocsrr
640cf5fecf7f28e08f56e1bec62dd61c
MgNet: A Unified Framework of Multigrid and Convolutional Neural Network
[ { "docid": "e459bd355ea9a009e0d69c11e96d1173", "text": "Based on a natural connection between ResNet and transport equation or its characteristic equation, we propose a continuous flow model for both ResNet and plain net. Through this continuous model, a ResNet can be explicitly constructed as a refinement of a plain net. The flow model provides an alternative perspective to understand phenomena in deep neural networks, such as why it is necessary and sufficient to use 2-layer blocks in ResNets, why deeper is better, and why ResNets are even deeper, and so on. It also opens a gate to bring in more tools from the huge area of differential equations.", "title": "" }, { "docid": "9e11005f60aa3f53481ac3543a18f32f", "text": "Deep residual networks (ResNets) have significantly pushed forward the state-ofthe-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth.", "title": "" }, { "docid": "1d2485f8a4e2a5a9f983bfee3e036b92", "text": "Partial differential equations (PDEs) are commonly derived based on empirical observations. However, recent advances of technology enable us to collect and store massive amount of data, which offers new opportunities for data-driven discovery of PDEs. In this paper, we propose a new deep neural network, called PDE-Net 2.0, to discover (time-dependent) PDEs from observed dynamic data with minor prior knowledge on the underlying mechanism that drives the dynamics. The design of PDE-Net 2.0 is based on our earlier work [1] where the original version of PDE-Net was proposed. PDE-Net 2.0 is a combination of numerical approximation of differential operators by convolutions and a symbolic multi-layer neural network for model recovery. Comparing with existing approaches, PDE-Net 2.0 has the most flexibility and expressive power by learning both differential operators and the nonlinear response function of the underlying PDE model. Numerical experiments show that the PDE-Net 2.0 has the potential to uncover the hidden PDE of the observed dynamics, and predict the dynamical behavior for a relatively long time, even in a noisy environment.", "title": "" } ]
[ { "docid": "1635b235c59cc57682735202c0bb2e0d", "text": "The introduction of structural imaging of the brain by computed tomography (CT) scans and magnetic resonance imaging (MRI) has further refined classification of head injury for prognostic, diagnosis, and treatment purposes. We describe a new classification scheme to be used both as a research and a clinical tool in association with other predictors of neurologic status.", "title": "" }, { "docid": "db6a91e0216440a4573aee6c78c78cbf", "text": "ObjectiveHeart rate monitoring using wrist type Photoplethysmographic (PPG) signals is getting popularity because of construction simplicity and low cost of wearable devices. The task becomes very difficult due to the presence of various motion artifacts. The objective is to develop algorithms to reduce the effect of motion artifacts and thus obtain accurate heart rate estimation. MethodsProposed heart rate estimation scheme utilizes both time and frequency domain analyses. Unlike conventional single stage adaptive filter, multi-stage cascaded adaptive filtering is introduced by using three channel accelerometer data to reduce the effect of motion artifacts. Both recursive least squares (RLS) and least mean squares (LMS) adaptive filters are tested. Moreover, singular spectrum analysis (SSA) is employed to obtain improved spectral peak tracking. The outputs from the filter block and SSA operation are logically combined and used for spectral domain heart rate estimation. Finally, a tracking algorithm is incorporated considering neighbouring estimates. ResultsThe proposed method provides an average absolute error of 1.16 beat per minute (BPM) with a standard deviation of 1.74 BPM while tested on publicly available database consisting of recordings from 12 subjects during physical activities. ConclusionIt is found that the proposed method provides consistently better heart rate estimation performance in comparison to that recently reported by TROIKA, JOSS and SPECTRAP methods. SignificanceThe proposed method offers very low estimation error and a smooth heart rate tracking with simple algorithmic approach and thus feasible for implementing in wearable devices to monitor heart rate for fitness and clinical purpose.", "title": "" }, { "docid": "6bcc65065f9e1f52bbe0276b4a5d8a45", "text": "Urban mobility impacts urban life to a great extent. To enhance urban mobility, much research was invested in traveling time prediction: given an origin and destination, provide a passenger with an accurate estimation of how long a journey lasts. In this work, we investigate a novel combination of methods from Queueing Theory and Machine Learning in the prediction process. We propose a prediction engine that, given a scheduled bus journey (route) and a ‘source/destination’ pair, provides an estimate for the traveling time, while considering both historical data and real-time streams of information that are transmitted by buses. We propose a model that uses natural segmentation of the data according to bus stops and a set of predictors, some use learning while others are learning-free, to compute traveling time. Our empirical evaluation, using bus data that comes from the bus network in the city of Dublin, demonstrates that the snapshot principle, taken from Queueing Theory works well yet suffers from outliers. 
To overcome the outliers problem, we use machine learning techniques as a regulator that assists in identifying outliers and propose prediction based on historical data.", "title": "" }, { "docid": "ea6392b6a49ed40cb5e3779e0d1f3ea2", "text": "We see the world in scenes, where visual objects occur in rich surroundings, often embedded in a typical context with other related objects. How does the human brain analyse and use these common associations? This article reviews the knowledge that is available, proposes specific mechanisms for the contextual facilitation of object recognition, and highlights important open questions. Although much has already been revealed about the cognitive and cortical mechanisms that subserve recognition of individual objects, surprisingly little is known about the neural underpinnings of contextual analysis and scene perception. Building on previous findings, we now have the means to address the question of how the brain integrates individual elements to construct the visual experience.", "title": "" }, { "docid": "32acba3e072e0113759278c57ee2aee2", "text": "Software product lines (SPL) relying on UML technology have been a breakthrough in software reuse in the IT domain. In the industrial automation domain, SPL are not yet established in industrial practice. One reason for this is that conventional function block programming techniques do not adequately support SPL architecture definition and product configuration, while UML tools are not industrially accepted for control software development. In this paper, the use of object oriented (OO) extensions of IEC 61131–3 are used to bridge this gap. The SPL architecture and product specifications are expressed as UML class diagrams, which serve as straightforward specifications for configuring the IEC 61131–3 control application with OO extensions. A product configurator tool has been developed using PLCopen XML technology to support the generation of an executable IEC 61131–3 application according to chosen product options. The approach is demonstrated using a mobile elevating working platform as a case study.", "title": "" }, { "docid": "c7c63f08639660f935744309350ab1e0", "text": "A composite of graphene oxide supported by needle-like MnO(2) nanocrystals (GO-MnO(2) nanocomposites) has been fabricated through a simple soft chemical route in a water-isopropyl alcohol system. The formation mechanism of these intriguing nanocomposites investigated by transmission electron microscopy and Raman and ultraviolet-visible absorption spectroscopy is proposed as intercalation and adsorption of manganese ions onto the GO sheets, followed by the nucleation and growth of the crystal species in a double solvent system via dissolution-crystallization and oriented attachment mechanisms, which in turn results in the exfoliation of GO sheets. Interestingly, it was found that the electrochemical performance of as-prepared nanocomposites could be enhanced by the chemical interaction between GO and MnO(2). This method provides a facile and straightforward approach to deposit MnO(2) nanoparticles onto the graphene oxide sheets (single layer of graphite oxide) and may be readily extended to the preparation of other classes of hybrids based on GO sheets for technological applications.", "title": "" }, { "docid": "3fa5544e35e021dcf64f882d79cf25fd", "text": "This article reviews methodological issues that arise in the application of exploratory factor analysis (EFA) to scale revision and refinement. 
The authors begin by discussing how the appropriate use of EFA in scale revision is influenced by both the hierarchical nature of psychological constructs and the motivations underlying the revision. Then they specifically address (a) important issues that arise prior to data collection (e.g., selecting an appropriate sample), (b) technical aspects of factor analysis (e.g., determining the number of factors to retain), and (c) procedures used to evaluate the outcome of the scale revision (e.g., determining whether the new measure functions equivalently for different populations).", "title": "" }, { "docid": "a71c53aed6a6805a5ebf0f69377411c0", "text": "We here illustrate a new indoor navigation system. It is an outcome of creativity, which merges an imaginative scenario and new technologies. The system intends to guide a person in an unknown building by relying on technologies which do not depend on infrastructures. The system includes two key components, namely positioning and path planning. Positioning is based on geomagnetic fields, and it overcomes the several limits of WIFI and Bluetooth, etc. Path planning is based on a new and optimized Ant Colony algorithm, called Ant Colony Optimization (ACO), which offers better performances than the classic A* algorithms. The paper illustrates the logic and the architecture of the system, and also presents experimental results.", "title": "" }, { "docid": "1203822bf82dcd890e7a7a60fb282ce5", "text": "Individuals with psychosocial problems such as social phobia or feelings of loneliness might be vulnerable to excessive use of cyber-technological devices, such as smartphones. We aimed to determine the relationship of smartphone addiction with social phobia and loneliness in a sample of university students in Istanbul, Turkey. Three hundred and sixty-seven students who owned smartphones were given the Smartphone Addiction Scale (SAS), UCLA Loneliness Scale (UCLA-LS), and Brief Social Phobia Scale (BSPS). A significant difference was found in the mean SAS scores (p < .001) between users who declared that their main purpose for smartphone use was to access social networking sites. The BSPS scores showed positive correlations with all six subscales and with the total SAS scores. The total UCLA-LS scores were positively correlated with daily life disturbance, positive anticipation, cyber-oriented relationship, and total scores on the SAS. In regression analyses, total BSPS scores were significant predictors for SAS total scores (β = 0.313, t = 5.992, p < .001). In addition, BSPS scores were significant predictors for all six SAS subscales, whereas UCLA-LS scores were significant predictors for only cyber-oriented relationship subscale scores on the SAS (β = 0.130, t = 2.416, p < .05). The results of this study indicate that social phobia was associated with the risk for smartphone addiction in young people. Younger individuals who primarily use their smartphones to access social networking sites also have an excessive pattern of smartphone use.", "title": "" }, { "docid": "0bdb1d537011582c599a68f70881b274", "text": "This article examines the acquisition of vocational skills through apprenticeship-type situated learning. Findings from studies of skilled workers revealed that learning processes that were consonant with the apprenticeship model of learning were highly valued as a means of acquiring and maintaining vocational skills. 
Supported by current research and theorising, this article describes some conditions by which situated learning through apprenticeship can be utilised to develop vocational skills. These conditions include the nature of the activities learners engage in, the agency of the learning environment and mentoring role of experts. Conditions which may inhibit the effectiveness of an apprenticeship approach to learning are also addressed. The article concludes by suggesting that situated approaches to learning, such as the apprenticeship model, may address problems of access to effective vocational skill development within the workforce.", "title": "" }, { "docid": "7b8fc04274ac8c01fd1619185ebe42c9", "text": "There are a few types of step-climbing wheelchairs in the world, but most of them are large and heavy because they are power-assisted. Therefore, they require a large space to maneuver, which is not always feasible with existing house architectures. This study proposes a novel step-climbing wheelchair based on lever propulsion control using human upper limbs. The developed step-climbing wheelchair device consists of manual wheels with casters for moving around and a rotary-legs mechanism that is capable of climbing steps. The wheelchair also has a passive mechanism for posture transition to shift the center of gravity of the person between the desired positions for planar locomotion and step-climbing. The proposed design consists of passive parts, and this leads to the wheelchair being compact and lightweight. In this paper, we present the design of this step-climbing wheelchair and some preliminary experiments to test its usability.", "title": "" }, { "docid": "50dd728b4157aefb7df35366f5822d0d", "text": "This paper describes iDriver, an iPhone software to remote control “Spirit of Berlin”. “Spirit of Berlin” is a completely autonomous car developed by the Free University of Berlin which is capable of unmanned driving in urban areas. iDriver is an iPhone application sending control packets to the car in order to remote control its steering wheel, gas and brake pedal, gear shift and turn signals. Additionally, a video stream from two top-mounted cameras is broadcasted back to the iPhone.", "title": "" }, { "docid": "67d704317471c71842a1dfe74ddd324a", "text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of them do not offer adequate support for project management. 
Yet, many methods still attempt to strive for universal solutions (as opposed to situation-appropriate ones), and the empirical evidence is still very limited. Based on the results, new directions are suggested. In principle, it is suggested to place emphasis on methodological quality -- not method quantity.", "title": "" }, { "docid": "2afeb302ce217ead9d2d66d02460f9ff", "text": "With the development of IoT technologies and the massive admiration and acceptance of social media tools and applications, new doors of opportunity have been opened for using data analytics in gaining meaningful insights from unstructured information. The application of opinion mining and sentiment analysis (OMSA) in the era of big data has been a useful way of categorizing opinion into different sentiments and, in general, evaluating the mood of the public. Moreover, different techniques of OMSA have been developed over the years in different data sets and applied to various experimental settings. In this regard, this paper presents a comprehensive systematic literature review that discusses both the technical aspect of OMSA (techniques and types) and the non-technical aspect in the form of application areas. Furthermore, this paper also highlights both technical aspects of OMSA in the form of challenges in the development of its technique and non-technical challenges mainly based on its application. These challenges are presented as a future direction for research.", "title": "" }, { "docid": "9cf9145a802c2093f7c6f5986aabb352", "text": "Although researchers have long studied using statistical modeling techniques to detect anomaly intrusion and profile user behavior, the feasibility of applying multinomial logistic regression modeling to predict multi-attack types has not been addressed, and the risk factors associated with individual major attacks remain unclear. To address the gaps, this study used the KDD-cup 1999 data and bootstrap simulation method to fit 3000 multinomial logistic regression models with the most frequent attack types (probe, DoS, U2R, and R2L) as an unordered independent variable, and identified 13 risk factors that are statistically significantly associated with these attacks. These risk factors were then used to construct a final multinomial model that had an ROC area of 0.99 for detecting abnormal events. Compared with the top KDD-cup 1999 winning results that were based on a rule-based decision tree algorithm, the multinomial logistic model-based classification results had similar sensitivity values in detecting normal and a significantly lower overall misclassification rate (18.9% vs. 35.7%). The study emphasizes that the multinomial logistic regression modeling technique with the 13 risk factors provides a robust approach to detect anomaly intrusion.", "title": "" }, { "docid": "7e8a161ba96ef2f36818479023ad0551", "text": "Computational thinking (CT) is being located at the focus of educational innovation, as a set of problem-solving skills that must be acquired by the new generations of students to thrive in a digital world full of objects driven by software. However, there is still no consensus on a CT definition or how to measure it. In response, we attempt to address both issues from a psychometric approach. On the one hand, a Computational Thinking Test (CTt) is administered to a sample of 1,251 Spanish students from 5th to 10th grade, so its descriptive statistics and reliability are reported in this paper. 
On the other hand, the criterion validity of the CTt is studied with respect to other standardized psychological tests: the Primary Mental Abilities (PMA) battery, and the RP30 problem-solving test. Thus, it is intended to provide a new instrument for CT measurement and additionally give evidence of the nature of CT through its associations with key related psychological constructs. Results show statistically significant and at least moderately intense correlations between CT and: spatial ability (r = 0.44), reasoning ability (r = 0.44), and problem-solving ability (r = 0.67). These results are consistent with recent theoretical proposals linking CT to some components of the Cattell-Horn-Carroll (CHC) model of intelligence, and corroborate the conceptualization of CT as a problem-solving ability. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f70b6d0a0b315a1ca87ccf5184c43da4", "text": "Transmitting secret information through the internet requires more security because of interception and improper manipulation by eavesdroppers. One of the most desirable explications of this is “Steganography”. This paper proposes a technique of steganography using Advanced Encryption Standard (AES) with a secured hash function in the blue channel of the image. The embedding system is done by a dynamic bit adjusting system in the blue channel of RGB images. It embeds message bits deeper into the image intensity, which is very difficult for any type of improper manipulation by hackers. Before embedding, the text is encrypted using AES with a hash function. For extraction, the cipher text bit is found from the image intensity using the bit adjusting extraction algorithm and then it is decrypted by AES with the same hash function to get the real secret text. The proposed approach is better in Peak Signal to Noise Ratio (PSNR) value and lower in histogram error between stego images and cover images than some existing systems. Keywords: AES-128, SHA-512, Cover Image, Stego image, Bit Adjusting, Blue Channel", "title": "" }, { "docid": "0ce4a0dfe5ea87fb87f5d39b13196e94", "text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.", "title": "" }, { "docid": "4229e2db880628ea2f0922a94c30efe0", "text": "Since the end of the 20th century, it has become clear that web browsers will play a crucial role in accessing Internet resources such as the World Wide Web. They evolved into complex software suites that are able to process a multitude of data formats. 
Just-In-Time (JIT) compilation was incorporated to speed up the execution of script code, but is also used outside web browsers for performance reasons. Attackers happily welcomed JIT in their own way, and until today, JIT compilers are an important target of various attacks. This includes for example JIT-Spray, JIT-based code-reuse attacks and JIT-specific flaws to circumvent mitigation techniques in order to simplify the exploitation of memory-corruption vulnerabilities. Furthermore, JIT compilers are complex and provide a large attack surface, which is visible in the steady stream of critical bugs appearing in them. In this paper, we survey and systematize the jungle of JIT compilers of major (client-side) programs, and provide a categorization of offensive techniques for abusing JIT compilation. Thereby, we present techniques used in academic as well as in non-academic works which try to break various defenses against memory-corruption vulnerabilities. Additionally, we discuss what mitigations arose to harden JIT compilers to impede exploitation by skilled attackers wanting to abuse Just-In-Time compilers.", "title": "" } ]
scidocsrr
133e6c414ef8cfb4ad5096082e2cf8d2
5G Backhaul Challenges and Emerging Research Directions: A Survey
[ { "docid": "121f1baeaba51ebfdfc69dde5cd06ce3", "text": "Mobile operators are facing an exponential traffic growth due to the proliferation of portable devices that require a high-capacity connectivity. This, in turn, leads to a tremendous increase of the energy consumption of wireless access networks. A promising solution to this problem is the concept of heterogeneous networks, which is based on the dense deployment of low-cost and low-power base stations, in addition to the traditional macro cells. However, in such a scenario the energy consumed by the backhaul, which aggregates the traffic from each base station towards the metro/core segment, becomes significant and may limit the advantages of heterogeneous network deployments. This paper aims at assessing the impact of backhaul on the energy consumption of wireless access networks, taking into consideration different data traffic requirements (i.e., from todays to 2020 traffic levels). Three backhaul architectures combining different technologies (i.e., copper, fiber, and microwave) are considered. Results show that backhaul can amount to up to 50% of the power consumption of a wireless access network. On the other hand, hybrid backhaul architectures that combines fiber and microwave performs relatively well in scenarios where the wireless network is characterized by a high small-base-stations penetration rate.", "title": "" }, { "docid": "471c52fca57c672267ef69e3e3db9cd9", "text": "This paper presents the approach of extending cellular networks with millimeter-wave backhaul and access links. Introducing a logical split between control and user plane will permit full coverage while seamlessly achieving very high data rates in the vicinity of mm-wave small cells.", "title": "" } ]
[ { "docid": "0d18f41db76330c5d9cdceb268ca3434", "text": "A Low-power convolutional neural network (CNN)-based face recognition system is proposed for the user authentication in smart devices. The system consists of two chips: an always-on CMOS image sensor (CIS)-based face detector (FD) and a low-power CNN processor. For always-on FD, analog–digital Hybrid Haar-like FD is proposed to improve the energy efficiency of FD by 39%. For low-power CNN processing, the CNN processor with 1024 MAC units and 8192-bit-wide local distributed memory operates at near threshold voltage, 0.46 V with 5-MHz clock frequency. In addition, the separable filter approximation is adopted for the workload reduction of CNN, and transpose-read SRAM using 7T SRAM cell is proposed to reduce the activity factor of the data read operation. Implemented in 65-nm CMOS technology, the <inline-formula> <tex-math notation=\"LaTeX\">$3.30 \\times 3.36$ </tex-math></inline-formula> mm<sup>2</sup> CIS chip and the <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> mm<sup>2</sup> CNN processor consume 0.62 mW to evaluate one face at 1 fps and achieved 97% accuracy in LFW dataset.", "title": "" }, { "docid": "e58036f93195603cb7dc7265b9adeb25", "text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.", "title": "" }, { "docid": "76c19c70f11244be16248a1b4de2355a", "text": "We have recently witnessed the emerging of cloud computing on one hand and robotics platforms on the other hand. Naturally, these two visions have been merging to give birth to the Cloud Robotics paradigm in order to offer even more remote services. 
But such a vision is still in its infancy. Architectures and platforms are still to be defined to efficiently program robots so they can provide different services, in a standardized way masking their heterogeneity. This paper introduces Open Mobile Cloud Robotics Interface (OMCRI), a Robot-as-a-Service vision based platform, which offers unified, easy access to remote heterogeneous mobile robots. OMCRI encompasses an extension of the Open Cloud Computing Interface (OCCI) standard and a gateway hosting mobile robot resources. We then provide an implementation of OMCRI based on the open source model-driven Eclipse-based OCCIware tool chain and illustrate its use for three off-the-shelf mobile robots: Lego Mindstorms NXT, Turtlebot, and Parrot AR. Drone.", "title": "" }, { "docid": "c5dfef21843d2cc1893ec1dc88787050", "text": "Automatic synthesis of faces from visual attributes is an important problem in computer vision and has wide applications in law enforcement and entertainment. With the advent of deep generative convolutional neural networks (CNNs), attempts have been made to synthesize face images from attributes and text descriptions. In this paper, we take a different approach, where we formulate the original problem as a stage-wise learning problem. We first synthesize the facial sketch corresponding to the visual attributes and then we reconstruct the face image based on the synthesized sketch. The proposed Attribute2Sketch2Face framework, which is based on a combination of deep Conditional Variational Autoencoder (CVAE) and Generative Adversarial Networks (GANs), consists of three stages: (1) Synthesis of facial sketch from attributes using a CVAE architecture, (2) Enhancement of coarse sketches to produce sharper sketches using a GAN-based framework, and (3) Synthesis of face from sketch using another GAN-based network. Extensive experiments and comparison with recent methods are performed to verify the effectiveness of the proposed attribute-based three-stage face synthesis method.", "title": "" }, { "docid": "7f7e0c3982ca660f5b4f7f22584576a5", "text": "Cooperation and competition (jointly called “coopetition”) are two modes of interactions among a set of concurrent topics on social media. How do topics cooperate or compete with each other to gain public attention? Which topics tend to cooperate or compete with one another? Who plays the key role in coopetition-related interactions? We answer these intricate questions by proposing a visual analytics system that facilitates the in-depth analysis of topic coopetition on social media. We model the complex interactions among topics as a combination of carry-over, coopetition recruitment, and coopetition distraction effects. This model provides a close functional approximation of the coopetition process by depicting how different groups of influential users (i.e., “topic leaders”) affect coopetition. We also design EvoRiver, a time-based visualization, that allows users to explore coopetition-related interactions and to detect dynamically evolving patterns, as well as their major causes. 
We test our model and demonstrate the usefulness of our system based on two Twitter data sets (social topics data and business topics data).", "title": "" }, { "docid": "8ccb8ba140fedc1eba8e97f3b7721373", "text": "This paper describes mathematical and software developments for a suite of programs for solving ordinary differential equations in Matlab.", "title": "" }, { "docid": "fb80c27ab2615373a316605082adadbb", "text": "The use of sparse representations in signal and image processing is gradually increasing in the past several years. Obtaining an overcomplete dictionary from a set of signals allows us to represent them as a sparse linear combination of dictionary atoms. Pursuit algorithms are then used for signal decomposition. A recent work introduced the K-SVD algorithm, which is a novel method for training overcomplete dictionaries that lead to sparse signal representation. In this work we propose a new method for compressing facial images, based on the K-SVD algorithm. We train K-SVD dictionaries for predefined image patches, and compress each new image according to these dictionaries. The encoding is based on sparse coding of each image patch using the relevant trained dictionary, and the decoding is a simple reconstruction of the patches by linear combination of atoms. An essential pre-process stage for this method is an image alignment procedure, where several facial features are detected and geometrically warped into a canonical spatial location. We present this new method, analyze its results and compare it to several competing compression techniques. 2008 Published by Elsevier Inc.", "title": "" }, { "docid": "609fa8716f97a1d30683997d778e4279", "text": "The role of behavior for the acquisition of sensory representations has been underestimated in the past. We study this question for the task of learning vergence eye movements allowing proper fixation of objects. We model the development of this skill with an artificial neural network based on reinforcement learning. A biologically plausible reward mechanism that is responsible for driving behavior and learning of the representation of disparity is proposed. The network learns to perform vergence eye movements between natural images of objects by receiving a reward whenever an object is fixated with both eyes. Disparity tuned neurons emerge robustly in the hidden layer during development. The characteristics of the cells' tuning curves depend strongly on the task: if mostly small vergence movements are to be performed, tuning curves become narrower at small disparities, as has been measured experimentally in barn owls. Extensive training to discriminate between small disparities leads to an effective enhancement of sensitivity of the tuning curves.", "title": "" }, { "docid": "6c889bf25b3d4c1bd87b26c03c8b652c", "text": "With popular microblogging services like Twitter, users are able to online share their real-time feelings in a more convenient way. The user generated data in Twitter is thus regarded as a resource providing individuals' spontaneous emotional information, and has attracted much attention of researchers. Prior work has measured the emotional expressions in users' tweets and then performed various analysis and learning. However, how to utilize those learned knowledge from the observed tweets and the context information to predict users' opinions toward specific topics they had not directly given yet, is a novel problem presenting both challenges and opportunities. 
In this paper, we mainly focus on solving this problem with a Social context and Topical context incorporated Matrix Factorization (ScTcMF) framework. The experimental results on a real-world Twitter data set show that this framework outperforms the state-of-the-art collaborative filtering methods, and demonstrate that both social context and topical context are effective in improving the user-topic opinion prediction performance.", "title": "" }, { "docid": "c62742c65b105a83fa756af9b1a45a37", "text": "This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. In current practice, one must either choose a conservatively large numerical precision at the outset or re-run paths multiple times in successively higher precision until success is achieved. To avoid unnecessary computational cost, it would be preferable to adaptively adjust the precision as the tracking proceeds in response to the local conditioning of the path. We present an algorithm that can be set to either reactively adjust precision in response to step failure or proactively set the precision using error estimates. We then test the relative merits of reactive and proactive adaptation on several examples arising as homotopies for solving systems of polynomial equations.", "title": "" }, { "docid": "50b0ecff19de467ab8558134fb666a87", "text": "Real-time video object detection, tracking, and recognition are challenging issues due to the real-time processing requirements of the machine learning algorithms. In recent years, video processing is performed by deep learning (DL) based techniques that achieve higher accuracy but require higher computation cost. This paper presents a recent survey of the state-of-the-art DL platforms and architectures used for deep vision systems. It highlights the contributions and challenges from numerous research studies. In particular, this paper first describes the architecture of various DL models such as AutoEncoders, deep Boltzmann machines, convolution neural networks, recurrent neural networks and deep residual learning. Next, deep real-time video object detection, tracking and recognition studies are highlighted to illustrate the key trends in terms of cost of computation, number of layers and the accuracy of results. Finally, the paper discusses the challenges of applying DL for real-time video processing and draws some directions for the future of DL algorithms.", "title": "" }, { "docid": "f6189455184135dfeff9cb2a85b9fef0", "text": "Precise, successful in desire target, strong healthy and self loading image registration is a critical task in the field of computer vision. The key required steps of image alignment/registration are: feature matching, feature detection, derivation of transformation function based on corresponding features in images and reconstruction of images based on derived transformation function. This is also the aim of computer vision in many applications to achieve an optimal and accurate image, which depends on optimal features matching and detection. The investigation of this paper summarizes the coincidence among five different methods for robust features/interest points (or landmarks) detector and identify images which are (FAST), Speed Up Robust Features (SURF), (Eigen), (Harris) & Maximally Stable Extremal Regions (MSER). 
This paper also focuses on the unique extraction from the images which can be used to perform good matching on different views of the images/objects/scenes.", "title": "" }, { "docid": "e36e26f084c0f589e5d36bb2103106ff", "text": "Supervised learning with large scale labelled datasets and deep layered models has caused a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers from generalization issues under the presence of a domain shift between the training and the test data distribution. Since unsupervised domain adaptation algorithms directly address this domain shift problem between a labelled source dataset and an unlabelled target dataset, recent papers [11, 33] have shown promising results by fine-tuning the networks with domain adaptation loss functions which try to align the mismatch between the training and testing data distributions. Nevertheless, these recent deep learning based domain adaptation approaches still suffer from issues such as high sensitivity to the gradient reversal hyperparameters [11] and overfitting during the fine-tuning stage. In this paper, we propose a unified deep learning framework where the representation, cross domain transformation, and target label inference are all jointly optimized in an end-to-end fashion for unsupervised domain adaptation. Our experiments show that the proposed method significantly outperforms state-of-the-art algorithms in both object recognition and digit classification experiments by a large margin.", "title": "" }, { "docid": "4fb93b393abac7cf7da9799a01fa9bab", "text": "The goal of text summarization is to reduce the size of the text while preserving its important information and overall meaning. With the availability of internet, data is growing leaps and bounds and it is practically impossible summarizing all this data manually. Automatic summarization can be classified as extractive and abstractive summarization. For abstractive summarization we need to understand the meaning of the text and then create a shorter version which best expresses the meaning, While in extractive summarization we select sentences from given data itself which contains maximum information and fuse those sentences to create an extractive summary. In this paper we tested all possible combinations of seven features and then reported the best one for particular document. We analyzed the results for all 10 documents taken from DUC 2002 dataset using ROUGE evaluation matrices.", "title": "" }, { "docid": "7aed9eeb7a8e922f5ffc0e920dbaeb1e", "text": "In 3 prior meta-analyses, the relationship between the Big Five factors of personality and job criteria was investigated. However, these meta-analyses showed different findings. Furthermore, these reviews included studies carried out only in the United States and Canada. This study reports meta-analytic research on the same topic but with studies conducted in the European Community, which were not included in the prior reviews. The results indicate that Conscientiousness and Emotional Stability are valid predictors across job criteria and occupational groups. The remaining factors are valid only for some criteria and for some occupational groups. Extraversion was a predictor for 2 occupations, and Openness and Agreeableness were valid predictors of training proficiency. These findings are consistent with M.R. Barrick and M.K. Mount (1991) and L.M. Hough, N.K. Eaton, M.D. Dunnette, J.D. Kamp, and R.A. McCloy (1990). 
Implications of the results for future research and the practice of personnel selection are suggested.", "title": "" }, { "docid": "d4075ad1c75e73c8e38bc139ecacac27", "text": "Manifold bootstrapping is a new method for data-driven modeling of real-world, spatially-varying reflectance, based on the idea that reflectance over a given material sample forms a low-dimensional manifold. It provides a high-resolution result in both the spatial and angular domains by decomposing reflectance measurement into two lower-dimensional phases. The first acquires representatives of high angular dimension but sampled sparsely over the surface, while the second acquires keys of low angular dimension but sampled densely over the surface.\n We develop a hand-held, high-speed BRDF capturing device for phase one measurements. A condenser-based optical setup collects a dense hemisphere of rays emanating from a single point on the target sample as it is manually scanned over it, yielding 10 BRDF point measurements per second. Lighting directions from 6 LEDs are applied at each measurement; these are amplified to a full 4D BRDF using the general (NDF-tabulated) microfacet model. The second phase captures N=20-200 images of the entire sample from a fixed view and lit by a varying area source. We show that the resulting N-dimensional keys capture much of the distance information in the original BRDF space, so that they effectively discriminate among representatives, though they lack sufficient angular detail to reconstruct the SVBRDF by themselves. At each surface position, a local linear combination of a small number of neighboring representatives is computed to match each key, yielding a high-resolution SVBRDF. A quick capture session (10-20 minutes) on simple devices yields results showing sharp and anisotropic specularity and rich spatial detail.", "title": "" }, { "docid": "ff0c99e547d41fbc71ba1d4ac4a17411", "text": "Measuring similarities between unlabeled time series trajectories is an important problem in domains as diverse as medicine, astronomy, finance, and computer vision. It is often unclear what is the appropriate metric to use because of the complex nature of noise in the trajectories (e.g. different sampling rates or outliers). Domain experts typically hand-craft or manually select a specific metric, such as dynamic time warping (DTW), to apply on their data. In this paper, we propose Autowarp, an end-to-end algorithm that optimizes and learns a good metric given unlabeled trajectories. We define a flexible and differentiable family of warping metrics, which encompasses common metrics such as DTW, Euclidean, and edit distance. Autowarp then leverages the representation power of sequence autoencoders to optimize for a member of this warping distance family. The output is a metric which is easy to interpret and can be robustly learned from relatively few trajectories. In systematic experiments across different domains, we show that Autowarp often outperforms hand-crafted trajectory similarity metrics.", "title": "" }, { "docid": "7f8211ed8d7c8145f370c46b5bba3ddb", "text": "The adjectives of quantity (Q-adjectives) many, few, much and little stand out from other quantity expressions on account of their syntactic flexibility, occurring in positions that could be called quantificational (many students attended), predicative (John’s friends were many), attributive (the many students), differential (much more than a liter) and adverbial (slept too much). 
This broad distribution poses a challenge for the two leading theories of this class, which treat them as either quantifying determiners or predicates over individuals. This paper develops an analysis of Q-adjectives as gradable predicates of sets of degrees or (equivalently) gradable quantifiers over degrees. It is shown that this proposal allows a unified analysis of these items across the positions in which they occur, while also overcoming several issues facing competing accounts, among others the divergences between Q-adjectives and ‘ordinary’ adjectives, the operator-like behavior of few and little, and the use of much as a dummy element. Overall the findings point to the central role of degrees in the semantics of quantity.", "title": "" }, { "docid": "83cea367e54cfe92718742cacbd61adf", "text": "We present an analysis into the inner workings of Convolutional Neural Networks (CNNs) for processing text. CNNs used for computer vision can be interpreted by projecting filters into image space, but for discrete sequence inputs CNNs remain a mystery. We aim to understand the method by which the networks process and classify text. We examine common hypotheses to this problem: that filters, accompanied by global max-pooling, serve as ngram detectors. We show that filters may capture several different semantic classes of ngrams by using different activation patterns, and that global max-pooling induces behavior which separates important ngrams from the rest. Finally, we show practical use cases derived from our findings in the form of model interpretability (explaining a trained model by deriving a concrete identity for each filter, bridging the gap between visualization tools in vision tasks and NLP) and prediction interpretability (explaining predictions).", "title": "" }, { "docid": "82bcf95fc94ba1369c6ec1c64f55b2ec", "text": "In this paper, we are interested in modeling complex activities that occur in a typical household. We propose to use programs, i.e., sequences of atomic actions and interactions, as a high level representation of complex tasks. Programs are interesting because they provide a non-ambiguous representation of a task, and allow agents to execute them. However, nowadays, there is no database providing this type of information. Towards this goal, we first crowd-source programs for a variety of activities that happen in people's homes, via a game-like interface used for teaching kids how to code. Using the collected dataset, we show how we can learn to extract programs directly from natural language descriptions or from videos. We then implement the most common atomic (inter)actions in the Unity3D game engine, and use our programs to \"drive\" an artificial agent to execute tasks in a simulated household environment. Our VirtualHome simulator allows us to create a large activity video dataset with rich ground-truth, enabling training and testing of video understanding models. We further showcase examples of our agent performing tasks in our VirtualHome based on language descriptions.", "title": "" } ]
scidocsrr
1d2655ff7197191d88dcd901e081171c
Security Assessment of Code Obfuscation Based on Dynamic Monitoring in Android Things
[ { "docid": "529e132a37f9fb37ddf04984236f4b36", "text": "The first steps in analyzing defensive malware are understanding what obfuscations are present in real-world malware binaries, how these obfuscations hinder analysis, and how they can be overcome. While some obfuscations have been reported independently, this survey consolidates the discussion while adding substantial depth and breadth to it. This survey also quantifies the relative prevalence of these obfuscations by using the Dyninst binary analysis and instrumentation tool that was recently extended for defensive malware analysis. The goal of this survey is to encourage analysts to focus on resolving the obfuscations that are most prevalent in real-world malware.", "title": "" }, { "docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24", "text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.", "title": "" } ]
[ { "docid": "f9b6662dc19c47892bb7b95c5b7dc181", "text": "The ability to update firmware is a feature that is found in nearly all modern embedded systems. We demonstrate how this feature can be exploited to allow attackers to inject malicious firmware modifications into vulnerable embedded devices. We discuss techniques for exploiting such vulnerable functionality and the implementation of a proof of concept printer malware capable of network reconnaissance, data exfiltration and propagation to general purpose computers and other embedded device types. We present a case study of the HP-RFU (Remote Firmware Update) LaserJet printer firmware modification vulnerability, which allows arbitrary injection of malware into the printer’s firmware via standard printed documents. We show vulnerable population data gathered by continuously tracking all publicly accessible printers discovered through an exhaustive scan of IPv4 space. To show that firmware update signing is not the panacea of embedded defense, we present an analysis of known vulnerabilities found in third-party libraries in 373 LaserJet firmware images. Prior research has shown that the design flaws and vulnerabilities presented in this paper are found in other modern embedded systems. Thus, the exploitation techniques presented in this paper can be generalized to compromise other embedded systems. Keywords-Embedded system exploitation; Firmware modification attack; Embedded system rootkit; HP-RFU vulnerability.", "title": "" }, { "docid": "149595fcd31fd2ddbf7c6d48ca6339dc", "text": "What factors underlie the adoption dynamics of ecommerce technologies among users in developing countries? Even though the internet promised to be the great equalizer, the nuanced variety of conditions and contingencies that shape user adoption of ecommerce technologies has received little scrutiny. Building on previous research on technology adoption, the paper proposes a global information technology (IT) adoption model. The model includes antecedents of performance expectancy, social influence, and technology opportunism and investigates the crucial influence of facilitating conditions. The proposed model is tested using data from 172 technology users from 37 countries, collected over a 1-year period. The findings suggest that in developing countries, facilitating conditions play a critical moderating role in understanding actual ecommerce adoption, especially when in tandem with technological opportunism. Altogether, the paper offers a preliminary scrutiny of the mechanics of ecommerce adoption in developing countries.", "title": "" }, { "docid": "7eec9c40d8137670a88992d40ef52101", "text": "Nowadays, most nurses, pre- and post-qualification, will be required to undertake a literature review at some point, either as part of a course of study, as a key step in the research process, or as part of clinical practice development or policy. For student nurses and novice researchers it is often seen as a difficult undertaking. It demands a complex range of skills, such as learning how to define topics for exploration, acquiring skills of literature searching and retrieval, developing the ability to analyse and synthesize data as well as becoming adept at writing and reporting, often within a limited time scale. The purpose of this article is to present a step-by-step guide to facilitate understanding by presenting the critical elements of the literature review process. 
While reference is made to different types of literature reviews, the focus is on the traditional or narrative review that is undertaken, usually either as an academic assignment or part of the research process.", "title": "" }, { "docid": "628c8b906e3db854ea92c021bb274a61", "text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from large-scale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-of-the-art methods.", "title": "" }, { "docid": "a3d95604c143f1cd511fd62fe62bb4f4", "text": "We propose a new method for unconstrained optimization of a smooth and strongly convex function, which attains the optimal rate of convergence of Nesterov’s accelerated gradient descent. The new algorithm has a simple geometric interpretation, loosely inspired by the ellipsoid method. We provide some numerical evidence that the new method can be superior to Nesterov’s accelerated gradient descent.", "title": "" }, { "docid": "6f0f6bf051ff36907b3184501cecbf19", "text": "American divorce rates rose from the 1950s to the 1970s, peaked around 1980, and have fallen ever since. The mean age at marriage also substantially increased after 1970. Using data from the Survey of Income and Program Participation, 1979 National Longitudinal Survey of Youth, and National Survey of Family Growth, I explore the extent to which the rise in age at marriage can explain the rapid decrease in divorce rates for cohorts marrying after 1980. Three different empirical approaches all suggest that the increase in women’s age at marriage was the main proximate cause of the fall in divorce. ∗Email: drotz@mathematica-mpr.com. 
I would like to thank Roland Fryer, Claudia Goldin, and Larry Katz for continued guidance and support on this project, as well as Timothy Bond, Richard Freeman, Stephanie Hurder, Jeff Liebman, Claudia Olivetti, Amanda Pallais, Laszlo Sandor, Emily Glassberg Sands, Alessandra Voena, Justin Wolfers, and seminar participants at Case Western Reserve University, Harvard University, Mathematica Policy Research, UCLA, University of Arizona, University of Illinois-Chicago, University of Iowa, University of Texas-Austin, and the US Census Bureau for helpful comments and discussions. I am also grateful to Larry Katz and Phillip Levine for providing data on oral contraceptive pill access and abortion rates respectively. All remaining errors are my own. This research has been supported by the NSF-IGERT program, \"Multidisciplinary Program in Inequality and Social Policy\" at Harvard University (Grant No. 0333403). The views expressed herein are those of the author and not necessarily those of Mathematica Policy Research.", "title": "" }, { "docid": "5dec9381369e61c30112bd87a044cb2f", "text": "A limiting factor for the application of IDA methods in many domains is the incompleteness of data repositories. Many records have fields that are not filled in, especially, when data entry is manual. In addition, a significant fraction of the entries can be erroneous and there may be no alternative but to discard these records. But every cell in a database is not an independent datum. Statistical relationships will constrain and, often determine, missing values. Data imputation, the filling in of missing values for partially missing data, can thus be an invaluable first step in many IDA projects. New imputation methods that can handle the large-scale problems and large-scale sparsity of industrial databases are needed. To illustrate the incomplete database problem, we analyze one database with instrumentation maintenance and test records for an industrial process. Despite regulatory requirements for process data collection, this database is less than 50% complete. Next, we discuss possible solutions to the missing data problem. Several approaches to imputation are noted and classified into two categories: data-driven and model-based. We then describe two machine-learning-based approaches that we have worked with. These build upon well-known algorithms: AutoClass and C4.5. Several experiments are designed, all using the maintenance database as a common test-bed but with various data splits and algorithmic variations. Results are generally positive with up to 80% accuracies of imputation. We conclude the paper by outlining some considerations in selecting imputation methods, and by discussing applications of data imputation for intelligent data analysis.", "title": "" }, { "docid": "93d498adaee9070ffd608c5c1fe8e8c9", "text": "INTRODUCTION\nFluorescence anisotropy (FA) is one of the major established methods accepted by industry and regulatory agencies for understanding the mechanisms of drug action and selecting drug candidates utilizing a high-throughput format.\n\n\nAREAS COVERED\nThis review covers the basics of FA and complementary methods, such as fluorescence lifetime anisotropy and their roles in the drug discovery process. The authors highlight the factors affecting FA readouts, fluorophore selection and instrumentation. 
Furthermore, the authors describe the recent development of a successful, commercially valuable FA assay for long QT syndrome drug toxicity to illustrate the role that FA can play in the early stages of drug discovery.\n\n\nEXPERT OPINION\nDespite the success in drug discovery, the FA-based technique experiences competitive pressure from other homogeneous assays. That being said, FA is an established yet rapidly developing technique, recognized by academic institutions, the pharmaceutical industry and regulatory agencies across the globe. The technical problems encountered in working with small molecules in homogeneous assays are largely solved, and new challenges come from more complex biological molecules and nanoparticles. With that, FA will remain one of the major work-horse techniques leading to precision (personalized) medicine.", "title": "" }, { "docid": "5a69b2301b95976ee29138092fc3bb1a", "text": "We present a new open source, extensible and flexible software platform for Bayesian evolutionary analysis called BEAST 2. This software platform is a re-design of the popular BEAST 1 platform to correct structural deficiencies that became evident as the BEAST 1 software evolved. Key among those deficiencies was the lack of post-deployment extensibility. BEAST 2 now has a fully developed package management system that allows third party developers to write additional functionality that can be directly installed to the BEAST 2 analysis platform via a package manager without requiring a new software release of the platform. This package architecture is showcased with a number of recently published new models encompassing birth-death-sampling tree priors, phylodynamics and model averaging for substitution models and site partitioning. A second major improvement is the ability to read/write the entire state of the MCMC chain to/from disk allowing it to be easily shared between multiple instances of the BEAST software. This facilitates checkpointing and better support for multi-processor and high-end computing extensions. Finally, the functionality in new packages can be easily added to the user interface (BEAUti 2) by a simple XML template-based mechanism because BEAST 2 has been re-designed to provide greater integration between the analysis engine and the user interface so that, for example, BEAST and BEAUti use exactly the same XML file format.", "title": "" }, { "docid": "f4b5577175cc87aab052a581081811f0", "text": "This study intends to report a review of the literature on the evolution of the information systems success model, specifically the DeLone & McLean model (1992), during the last twenty-five years. It is also intended to refer to the main criticisms of the model by the various researchers who contributed to its updating, making it one of the most used to this day.", "title": "" }, { "docid": "82c37d40a58749aaf75cff5b90eed966", "text": "The input-output mapping defined by Eq. 1 of the main manuscript is differentiable with respect to both input functions, o(x), c(x), and as such lends itself to end-to-end training with back-propagation. Given a gradient signal δ(·) = ∂L/∂m(·) that dictates how the output layer activations should change to decrease the loss L, we obtain the update equations for c(·) and o(·) = (o_x(·), o_y(·)) through the following chain rule:", "title": "" }, { "docid": "1f18623625304f7c47ca144c8acf4bc9", "text": "Deep neural networks (DNNs) are known to be vulnerable to adversarial perturbations, which poses a serious threat to DNN-based decision systems. 
In this paper, we propose to apply the lossy Saak transform to adversarially perturbed images as a preprocessing tool to defend against adversarial attacks. Saak transform is a recently-proposed state-of-the-art for computing the spatial-spectral representations of input images. Empirically, we observe that outputs of the Saak transform are very discriminative in differentiating adversarial examples from clean ones. Therefore, we propose a Saak transform based preprocessing method with three steps: 1) transforming an input image to a joint spatial-spectral representation via the forward Saak transform, 2) apply filtering to its high-frequency components, and, 3) reconstructing the image via the inverse Saak transform. The processed image is found to be robust against adversarial perturbations. We conduct extensive experiments to investigate various settings of the Saak transform and filtering functions. Without harming the decision performance on clean images, our method outperforms state-of-the-art adversarial defense methods by a substantial margin on both the CIFAR10 and ImageNet datasets. Importantly, our results suggest that adversarial perturbations can be effectively and efficiently defended using state-of-the-art frequency analysis.", "title": "" }, { "docid": "87f5ed217015a5b9590290fe80278527", "text": "Probabilistic topic models are widely used in different contexts to uncover the hidden structure in large text corpora. One of the main (and perhaps strong) assumption of these models is that generative process follows a bag-of-words assumption, i.e. each token is independent from the previous one. We extend the popular Latent Dirichlet Allocation model by exploiting three different conditional Markovian assumptions: (i) the token generation depends on the current topic and on the previous token; (ii) the topic associated with each observation depends on topic associated with the previous one; (iii) the token generation depends on the current and previous topic. For each of these modeling assumptions we present a Gibbs Sampling procedure for parameter estimation. Experimental evaluation over real-word data shows the performance advantages, in terms of recall and precision, of the sequence-modeling approaches.", "title": "" }, { "docid": "328db3cbbf53bd26ea8b1cb8d1c197be", "text": "BACKGROUND\nNarcolepsy with cataplexy is associated with a loss of orexin/hypocretin. It is speculated that an autoimmune process kills the orexin-producing neurons, but these cells may survive yet fail to produce orexin.\n\n\nOBJECTIVE\nTo examine whether other markers of the orexin neurons are lost in narcolepsy with cataplexy.\n\n\nMETHODS\nWe used immunohistochemistry and in situ hybridization to examine the expression of orexin, neuronal activity-regulated pentraxin (NARP), and prodynorphin in hypothalami from five control and two narcoleptic individuals.\n\n\nRESULTS\nIn the control hypothalami, at least 80% of the orexin-producing neurons also contained prodynorphin mRNA and NARP. In the patients with narcolepsy, the number of cells producing these markers was reduced to about 5 to 10% of normal.\n\n\nCONCLUSIONS\nNarcolepsy with cataplexy is likely caused by a loss of the orexin-producing neurons. 
In addition, loss of dynorphin and neuronal activity-regulated pentraxin may contribute to the symptoms of narcolepsy.", "title": "" }, { "docid": "673e1ec63a0e84cf3fbf450928d89905", "text": "This study proposed an IoT (Internet of Things) system for the monitoring and control of the aquaculture platform. The proposed system is network surveillance combined with mobile devices and a remote platform to collect real-time farm environmental information. The real-time data is captured and displayed via ZigBee wireless transmission signal transmitter to remote computer terminals. This study permits real-time observation and control of aquaculture platform with dissolved oxygen sensors, temperature sensing elements using A/D and microcontrollers signal conversion. The proposed system will use municipal electricity coupled with a battery power source to provide power with battery intervention if municipal power is interrupted. This study is to make the best fusion value of multi-odometer measurement data for optimization via the maximum likelihood estimation (MLE).Finally, this paper have good efficient and precise computing in the experimental results.", "title": "" }, { "docid": "6e4f0a770fe2a34f99957f252110b6bd", "text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.", "title": "" }, { "docid": "c26e9f486621e37d66bf0925d8ff2a3e", "text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.", "title": "" }, { "docid": "7074d77d242b4d1ecbebc038c04698b8", "text": "We discuss our tools and techniques to monitor and inject packets in Bluetooth Low Energy. Also known as BTLE or Bluetooth Smart, it is found in recent high-end smartphones, sports devices, sensors, and will soon appear in many medical devices. 
We show that we can effectively render useless the encryption of any Bluetooth Low Energy link.", "title": "" }, { "docid": "83a4a89d3819009d61123a146b38d0e9", "text": "OBJECTIVE\nBehçet's disease (BD) is a chronic, relapsing, inflammatory vascular disease with no pathognomonic test. Low sensitivity of the currently applied International Study Group (ISG) clinical diagnostic criteria led to their reassessment.\n\n\nMETHODS\nAn International Team for the Revision of the International Criteria for BD (from 27 countries) submitted data from 2556 clinically diagnosed BD patients and 1163 controls with BD-mimicking diseases or presenting at least one major BD sign. These were randomly divided into training and validation sets. Logistic regression, 'leave-one-country-out' cross-validation and clinical judgement were employed to develop new International Criteria for BD (ICBD) with the training data. Existing and new criteria were tested for their performance in the validation set.\n\n\nRESULTS\nFor the ICBD, ocular lesions, oral aphthosis and genital aphthosis are each assigned 2 points, while skin lesions, central nervous system involvement and vascular manifestations 1 point each. The pathergy test, when used, was assigned 1 point. A patient scoring ≥4 points is classified as having BD. In the training set, 93.9% sensitivity and 92.1% specificity were assessed compared with 81.2% sensitivity and 95.9% specificity for the ISG criteria. In the validation set, ICBD demonstrated an unbiased estimate of sensitivity of 94.8% (95% CI: 93.4-95.9%), considerably higher than that of the ISG criteria (85.0%). Specificity (90.5%, 95% CI: 87.9-92.8%) was lower than that of the ISG-criteria (96.0%), yet still reasonably high. For countries with at least 90%-of-cases and controls having a pathergy test, adding 1 point for pathergy test increased the estimate of sensitivity from 95.5% to 98.5%, while barely reducing specificity from 92.1% to 91.6%.\n\n\nCONCLUSION\nThe new proposed criteria derived from multinational data exhibits much improved sensitivity over the ISG criteria while maintaining reasonable specificity. It is proposed that the ICBD criteria to be adopted both as a guide for diagnosis and classification of BD.", "title": "" }, { "docid": "c40f1282c12a9acee876d127dffbd733", "text": "Online markets pose a difficulty for evaluating products, particularly experience goods, such as used cars, that cannot be easily described online. This exacerbates product uncertainty, the buyer’s difficulty in evaluating product characteristics, and predicting how a product will perform in the future. However, the IS literature has focused on seller uncertainty and ignored product uncertainty. To address this void, this study conceptualizes product uncertainty and examines its effects and antecedents in online markets for used cars (eBay Motors).", "title": "" } ]
scidocsrr
9b46d8b998dcaec1f2d5cebb6b5ff4bb
Light scattering from human hair fibers
[ { "docid": "7f66cfc591970b3e90c54223cf8cf160", "text": "A reflection and refraction model for anisotropic surfaces is introduced. The anisotropy is simulated by small cylinders (added or subtracted) distributed on the anisotropic surface. Different levels of anisotropy are achieved by varying the distance between each cylinder and/or raising the cylinders more or less from the surface. Multidirectional anisotropy is modelled by orienting groups of cylinders in different directions. The intensity of the reflected light is computed by determining the visible and illuminated portion of the cylinders, taking self-blocking into account. We present two techniques to compute this in practice. In one, the intensity is computed by sampling the surface of the cylinders. The other is an analytic solution. In the case of the diffuse component, the solution is exact. In the case of the specular component, an approximation is developed using a Chebyshev polynomial approximation of the specular term, and integrating the polynomial. This model can be implemented easily within most rendering systems, given a suitable mechanism to define and alter surface tangents. The effectiveness of the model and the visual importance of anisotropy are illustrated with some pictures.", "title": "" } ]
[ { "docid": "b9538c45fc55caff8b423f6ecc1fe416", "text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.", "title": "" }, { "docid": "e066f0670583195b9ad2f3c888af1dd2", "text": "Deep learning has received much attention as of the most powerful approaches for multimodal representation learning in recent years. An ideal model for multimodal data can reason about missing modalities using the available ones, and usually provides more information when multiple modalities are being considered. All the previous deep models contain separate modality-specific networks and find a shared representation on top of those networks. Therefore, they only consider high level interactions between modalities to find a joint representation for them. In this paper, we propose a multimodal deep learning framework (MDLCW) that exploits the cross weights between representation of modalities, and try to gradually learn interactions of the modalities in a deep network manner (from low to high level interactions). Moreover, we theoretically show that considering these interactions provide more intra-modality information, and introduce a multi-stage pre-training method that is based on the properties of multi-modal data. In the proposed framework, as opposed to the existing deep methods for multi-modal data, we try to reconstruct the representation of each modality at a given level, with representation of other modalities in the previous layer. 
Extensive experimental results show that the proposed model outperforms state-of-the-art information retrieval methods for both image and text queries on the PASCAL-sentence and SUN-Attribute databases.", "title": "" }, { "docid": "9422f8c85859aca10e7d2a673b0377ba", "text": "Many adolescents are experiencing a reduction in sleep as a consequence of a variety of behavioral factors (e.g., academic workload, social and employment opportunities), even though scientific evidence suggests that the biological need for sleep increases during maturation. Consequently, the ability to effectively interact with peers while learning and processing novel information may be diminished in many sleepdeprived adolescents. Furthermore, sleep deprivation may account for reductions in cognitive efficiency in many children and adolescents with special education needs. In response to recognition of this potential problem by parents, educators, and scientists, some school districts have implemented delayed bus schedules and school start times to allow for increased sleep duration for high school students, in an effort to increase academic performance and decrease behavioral problems. The long-term effects of this change are yet to be determined; however, preliminary studies suggest that the short-term impact on learning and behavior has been beneficial. Thus, many parents, teachers, and scientists are supporting further consideration of this information to formulate policies that may maximize learning and developmental opportunities for children. Although changing school start times may be an effective method to combat sleep deprivation in most adolescents, some adolescents experience sleep deprivation and consequent diminished daytime performance because of common underlying sleep disorders (e.g., asthma or sleep apnea). In such cases, surgical, pharmaceutical, or respiratory therapy, or a combination of the three, interventions are required to restore normal sleep and daytime performance.", "title": "" }, { "docid": "18df6df67ced4564b3873d487a25f2d9", "text": "The past few years have seen a dramatic increase in the performance of recognition systems thanks to the introduction of deep networks for representation learning. However, the mathematical reasons for this success remain elusive. A key issue is that the neural network training problem is nonconvex, hence optimization algorithms may not return a global minima. This paper provides sufficient conditions to guarantee that local minima are globally optimal and that a local descent strategy can reach a global minima from any initialization. Our conditions require both the network output and the regularization to be positively homogeneous functions of the network parameters, with the regularization being designed to control the network size. Our results apply to networks with one hidden layer, where size is measured by the number of neurons in the hidden layer, and multiple deep subnetworks connected in parallel, where size is measured by the number of subnetworks.", "title": "" }, { "docid": "d5d96493b34cfbdf135776e930ec5979", "text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. 
We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.", "title": "" }, { "docid": "fcef7ce729a08a5b8c6ed1d0f2d53633", "text": "Community question-answering (CQA) systems, such as Yahoo! Answers or Stack Overflow, belong to a prominent group of successful and popular Web 2.0 applications, which are used every day by millions of users to find an answer on complex, subjective, or context-dependent questions. In order to obtain answers effectively, CQA systems should optimally harness collective intelligence of the whole online community, which will be impossible without appropriate collaboration support provided by information technologies. Therefore, CQA became an interesting and promising subject of research in computer science and now we can gather the results of 10 years of research. Nevertheless, in spite of the increasing number of publications emerging each year, so far the research on CQA systems has missed a comprehensive state-of-the-art survey. We attempt to fill this gap by a review of 265 articles published between 2005 and 2014, which were selected from major conferences and journals. According to this evaluation, at first we propose a framework that defines descriptive attributes of CQA approaches. Second, we introduce a classification of all approaches with respect to problems they are aimed to solve. The classification is consequently employed in a review of a significant number of representative approaches, which are described by means of attributes from the descriptive framework. As a part of the survey, we also depict the current trends as well as highlight the areas that require further attention from the research community.", "title": "" }, { "docid": "85e5eb2818b46f7dc571600486aa10d6", "text": "Electronic commerce is an increasingly popular business model with a wide range of tools available to firms. An application that is becoming more common is the use of self-service technologies (SSTs), such as telephone banking, automated hotel checkout, and online investment trading, whereby customers produce services for themselves without assistance from firm employees. Widespread introduction of SSTs is apparent across industries, yet relatively little is known about why customers decide to try SSTs and why some SSTs are more widely accepted than others. In this research, the authors explore key factors that influence the initial SST trial decision, specifically focusing on actual behavior in situations in which the consumer has a choice among delivery modes. 
The authors show that the consumer readiness variables of role clarity, motivation, and ability are key mediators between established adoption constructs (innovation characteristics and individual differences) and the likelihood of trial.", "title": "" }, { "docid": "545bd32c5c64eed3b780768e1862168a", "text": "This position paper discusses AI challenges in the area of real–time strategy games and presents a research agenda aimed at improving AI performance in these popular multi– player computer games. RTS Games and AI Research Real–time strategy (RTS) games such as Blizzard Entertainment’s Starcraft(tm) and Warcraft(tm) series form a large and growing part of the multi–billion dollar computer games industry. In these games several players fight over resources, which are scattered over a terrain, by first setting up economies, building armies, and ultimately trying to eliminate all enemy units and buildings. The current AI performance in commercial RTS games is poor. The main reasons why the AI performance in RTS games is lagging behind developments in related areas such as classic board games are the following: • RTS games feature hundreds or even thousands of interacting objects, imperfect information, and fast–paced micro–actions. By contrast, World–class game AI systems mostly exist for turn–based perfect information games in which the majority of moves have global consequences and human planning abilities therefore can be outsmarted by mere enumeration. • Video games companies create titles under severe time constraints and do not have the resources and incentive (yet) to engage in AI research. • Multi–player games often do not require World–class AI performance in order to be commercially successful as long as there are enough human players interested in playing the game on–line. • RTS games are complex which means that it is not easy to set up an RTS game infrastructure for conducting AI experiments. Closed commercial RTS game software without AI interfaces does not help, either. The result is a lack of AI competition in this area which in the classic games sector is one of the most important driving forces of AI research. Copyright c © 2004, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. To get a feeling for the vast complexity of RTS games, imagine to play chess on a 512×512 board with hundreds of slow simultaneously moving pieces, player views restricted to small areas around their own pieces, and the ability to gather resources and create new material. While human players sometimes struggle with micro– managing all their objects, it is the incremental nature of the actions that allows them to outperform any existing RTS game AI. The difference to classic abstract games like chess and Othello in this respect is striking: many moves in these games have immediate global effects. This makes it hard for human players to consider deep variations with all their consequences. On the other hand, computers programs conducting full–width searches with selective extensions excel in complex combinatorial situations. A notable exception is the game of go in which — like in RTS games — moves often have only incremental effects and today’s best computer programs are still easily defeated by amateurs (Müller 2002). It is in these domains where the human abilities to abstract, generalize, reason, learn, and plan shine and the current commercial RTS AI systems — which do not reason nor adapt — fail. 
Other arguments in favor of AI research in RTS games are: • (RTS) games constitute well–defined environments to conduct experiments in and offer straight–forward objective ways of measuring performance, • RTS games can be tailored to focus on specific aspects such as how to win local fights, how to scout effectively, how to build, attack, and defend a town, etc., • Strong game AI will likely make a difference in future commercial games because graphics improvements are beginning to saturate. Furthermore, smarter robot enemies and allies definitely add to the game experience as they are available 24 hours a day and do not get tired. • The current state of RTS game AI is so bad that there are a lot of low–hanging fruits waiting to be picked. Examples include research on smart game interfaces that alleviate human players from tedious tasks such as manually concentrating fire in combat. Game AI can also help in the development of RTS games — for instance by providing tools for unit balancing. • Finally, progress in RTS game AI is also of interest for the military which uses battle simulations in training programs (Herz & Macedonia 2002) and also pursues research into autonomous weapon systems.", "title": "" }, { "docid": "a3afea380667f2f088f37ae9127fb05a", "text": "This paper presents a new distributed approach to detecting DDoS (distributed denial of services) flooding attacks at the traffic-flow level The new defense system is suitable for efficient implementation over the core networks operated by Internet service providers (ISPs). At the early stage of a DDoS attack, some traffic fluctuations are detectable at Internet routers or at the gateways of edge networks. We develop a distributed change-point detection (DCD) architecture using change aggregation trees (CAT). The idea is to detect abrupt traffic changes across multiple network domains at the earliest time. Early detection of DDoS attacks minimizes the floe cling damages to the victim systems serviced by the provider. The system is built over attack-transit routers, which work together cooperatively. Each ISP domain has a CAT server to aggregate the flooding alerts reported by the routers. CAT domain servers collaborate among themselves to make the final decision. To resolve policy conflicts at different ISP domains, a new secure infrastructure protocol (SIP) is developed to establish mutual trust or consensus. We simulated the DCD system up to 16 network domains on the Cyber Defense Technology Experimental Research (DETER) testbed, a 220-node PC cluster for Internet emulation experiments at the University of Southern California (USC) Information Science Institute. Experimental results show that four network domains are sufficient to yield a 98 percent detection accuracy with only 1 percent false-positive alarms. Based on a 2006 Internet report on autonomous system (AS) domain distribution, we prove that this DDoS defense system can scale well to cover 84 AS domains. This security coverage is wide enough to safeguard most ISP core networks from real-life DDoS flooding attacks.", "title": "" }, { "docid": "672fa729e41d20bdd396f9de4ead36b3", "text": "Data that encompasses relationships is represented by a graph of interconnected nodes. Social network analysis is the study of such graphs which examines questions related to structures and patterns that can lead to the understanding of the data and predicting the trends of social networks. 
Static analysis, where the time of interaction is not considered (i.e., the network is frozen in time), misses the opportunity to capture the evolutionary patterns in dynamic networks. Specifically, detecting the community evolutions, the community structures that changes in time, provides insight into the underlying behaviour of the network. Recently, a number of researchers have started focusing on identifying critical events that characterize the evolution of communities in dynamic scenarios. In this paper, we present a framework for modeling and detecting community evolution in social networks, where a series of significant events is defined for each community. A community matching algorithm is also proposed to efficiently identify and track similar communities over time. We also define the concept of meta community which is a series of similar communities captured in different timeframes and detected by our matching algorithm. We illustrate the capabilities and potential of our framework by applying it to two real datasets. Furthermore, the events detected by the framework is supplemented by extraction and investigation of the topics discovered for each community. c © 2011 Published by Elsevier Ltd.", "title": "" }, { "docid": "6b7bc505296093ded055e96bb344b42a", "text": "Cellular network operators are always seeking to increase the area of coverage of their networks, open up new markets and provide services to potential customers in remote rural areas. However, increased energy consumption, operator energy cost and the potential environmental impact of increased greenhouse gas emissions and the exhaustion of non-renewable energy resources (fossil fuel) pose major challenges to cellular network operators. The specific power supply needs for rural base stations (BSs) such as cost-effectiveness, efficiency, sustainability and reliability can be satisfied by taking advantage of the technological advances in renewable energy. This study investigates the possibility of decreasing both operational expenditure (OPEX) and greenhouse gas emissions with guaranteed sustainability and reliability for rural BSs using a solar photovoltaic/diesel generator hybrid power system. Three key aspects have been investigated: (i) energy yield, (ii) economic factors and (iii) greenhouse gas emissions. The results showed major benefits for mobile operators in terms of both environmental conservation and OPEX reduction, with an average annual OPEX savings of 43% to 47% based on the characteristics of solar radiation exposure in Malaysia. Finally, the paper compares the feasibility of using the proposed approach in a four-season country and compares the results against results obtained in Malaysia, which is a country with a tropical climate.", "title": "" }, { "docid": "41e04cbe2ca692cb65f2909a11a4eb5b", "text": "Bitcoin’s core innovation is its solution to double-spending, called Nakamoto consensus. This mechanism provides a probabilistic guarantee that transactions will not be reversed once they are sufficiently deep in the blockchain, assuming an attacker controls a bounded fraction of mining power in the network. We show, however, that when miners are rational this guarantee can be undermined by a whale attack in which an attacker issues an off-theblockchain whale transaction with an anomalously large transaction fee in an effort to convince miners to fork the current chain. 
We carry out a game-theoretic analysis and simulation of this attack, and show conditions under which it yields an expected positive payoff for the attacker.", "title": "" }, { "docid": "b08f67bc9b84088f8298b35e50d0b9c5", "text": "This review examines different nutritional guidelines, some case studies, and provides insights and discrepancies, in the regulatory framework of Food Safety Management of some of the world's economies. There are thousands of fermented foods and beverages, although the intention was not to review them but check their traditional and cultural value, and if they are still lacking to be classed as a category on different national food guides. For understanding the inconsistencies in claims of concerning fermented foods among various regulatory systems, each legal system should be considered unique. Fermented foods and beverages have long been a part of the human diet, and with further supplementation of probiotic microbes, in some cases, they offer nutritional and health attributes worthy of recommendation of regular consumption. Despite the impact of fermented foods and beverages on gastro-intestinal wellbeing and diseases, their many health benefits or recommended consumption has not been widely translated to global inclusion in world food guidelines. In general, the approach of the legal systems is broadly consistent and their structures may be presented under different formats. African traditional fermented products are briefly mentioned enhancing some recorded adverse effects. Knowing the general benefits of traditional and supplemented fermented foods, they should be a daily item on most national food guides.", "title": "" }, { "docid": "ba7b51dc253da1a17aaf12becb1abfed", "text": "This papers aims to design a new approach in order to increase the performance of the decision making in model-based fault diagnosis when signature vectors of various faults are identical or closed. The proposed approach consists on taking into account the knowledge issued from the reliability analysis and the model-based fault diagnosis. The decision making, formalised as a bayesian network, is established with a priori knowledge on the dynamic component degradation through Markov chains. The effectiveness and performances of the technique are illustrated on a heating water process corrupted by faults. Copyright © 2006 IFAC", "title": "" }, { "docid": "00357ea4ef85efe5cd2080e064ddcd06", "text": "The cumulative match curve (CMC) is used as a measure of 1: m identification system performance. It judges the ranking capabilities of an identification system. The receiver operating characteristic curve (ROC curve) of a verification system, on the other hand, expresses the quality of a 1:1 matcher. The ROC plots the false accept rate (FAR) of a 1:1 matcher versus the false reject rate (FRR) of the matcher. We show that the CMC is also related to the FAR and FRR of a 1:1 matcher, i.e., the matcher that is used to rank the candidates by sorting the scores. This has as a consequence that when a 1:1 matcher is used for identification, that is, for sorting match scores from high to low, the CMC does not offer any additional information beyond the FAR and FRR curves. The CMC is just another way of displaying the data and can be computed from the FAR and FRR.", "title": "" }, { "docid": "18f8d1fef840c1a4441b5949d6b97d9e", "text": "Geospatial web service of agricultural information has a wide variety of consumers. 
An operational agricultural service will receive considerable requests and process a huge amount of datasets each day. To ensure the service quality, many strategies have to be taken during developing and deploying agricultural information services. This paper presents a set of methods to build robust geospatial web service for agricultural information extraction and sharing. The service is designed to serve the public and handle heavy-load requests for a long-lasting term with least maintenance. We have developed a web service to validate our approach. The service is used to serve more than 10 TB data product of agricultural drought. The performance is tested. The result shows that the service has an excellent response time and the use of system resources is stable. We have plugged the service into an operational system for global drought monitoring. The statistics and feedbacks show our approach is feasible and efficient in operational web systems.", "title": "" }, { "docid": "b205efe2ce90ec2ee3a394dd01202b60", "text": "Recurrent Neural Networks (RNNs) is a sub type of neural networks that use feedback connections. Several types of RNN models are used in predicting financial time series. This study was conducted to develop models to predict daily stock prices of selected listed companies of Colombo Stock Exchange (CSE) based on Recurrent Neural Network (RNN) Approach and to measure the accuracy of the models developed and identify the shortcomings of the models if present. Feedforward, Simple Recurrent Neural Network (SRNN), Gated Recurrent Unit (GRU) and Long Short Term Memory (LSTM) architectures were employed in building models. Closing, High and Low prices of past two days were selected as input variables for each company. Feedforward networks produce the highest and lowest forecasting errors. The forecasting accuracy of the best feedforward networks is approximately 99%. SRNN and LSTM networks generally produce lower errors compared with feedforward networks but in some occasions, the error is higher than feed forward networks. Compared to other two networks, GRU networks are having comparatively higher forecasting errors.", "title": "" }, { "docid": "29549f0cb8b45d6b39e58c9a9237431f", "text": "Over the past 5 years, the advent of echocardiographic screening for rheumatic heart disease (RHD) has revealed a higher RHD burden than previously thought. In light of this global experience, the development of new international echocardiographic guidelines that address the full spectrum of the rheumatic disease process is opportune. Systematic differences in the reporting of and diagnostic approach to RHD exist, reflecting differences in local experience and disease patterns. The World Heart Federation echocardiographic criteria for RHD have, therefore, been developed and are formulated on the basis of the best available evidence. Three categories are defined on the basis of assessment by 2D, continuous-wave, and color-Doppler echocardiography: 'definite RHD', 'borderline RHD', and 'normal'. Four subcategories of 'definite RHD' and three subcategories of 'borderline RHD' exist, to reflect the various disease patterns. The morphological features of RHD and the criteria for pathological mitral and aortic regurgitation are also defined. The criteria are modified for those aged over 20 years on the basis of the available evidence. 
The standardized criteria aim to permit rapid and consistent identification of individuals with RHD without a clear history of acute rheumatic fever and hence allow enrollment into secondary prophylaxis programs. However, important unanswered questions remain about the significance of subclinical disease (borderline or definite RHD on echocardiography without a clinical pathological murmur), and about the practicalities of implementing screening programs. These standardized criteria will enable new studies to be designed to evaluate the role of echocardiographic screening in RHD control.", "title": "" } ]
scidocsrr
67f6ef8cf4238ed8554b090179401fe8
TransCut: Transparent Object Segmentation from a Light-Field Image
[ { "docid": "80b514540933a9cc31136c8cb86ec9b3", "text": "We tackle the problem of detecting occluded regions in a video stream. Under assumptions of Lambertian reflection and static illumination, the task can be posed as a variational optimization problem, and its solution approximated using convex minimization. We describe efficient numerical schemes that reach the global optimum of the relaxed cost functional, for any number of independently moving objects, and any number of occlusion layers. We test the proposed algorithm on benchmark datasets, expanded to enable evaluation of occlusion detection performance, in addition to optical flow.", "title": "" } ]
[ { "docid": "b1a0a76e73aa5b0a893e50b2fadf0ad2", "text": "The field of occupational therapy, as with all facets of health care, has been profoundly affected by the changing climate of health care delivery. The combination of cost-effectiveness and quality of care has become the benchmark for and consequent drive behind the rise of managed health care delivery systems. The spawning of outcomes research is in direct response to the need for comparative databases to provide results of effectiveness in health care treatment protocols, evaluations of health-related quality of life, and cost containment measures. Outcomes management is the application of outcomes research data by all levels of health care providers. The challenges facing occupational therapists include proving our value in an economic trend of downsizing, competing within the medical profession, developing and affiliating with new payer sources, and reengineering our careers to meet the needs of the new, nontraditional health care marketplace.", "title": "" }, { "docid": "d3049fee1ed622515f5332bcfa3bdd7b", "text": "PURPOSE\nTo prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications.\n\n\nMETHODS\nNineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility.\n\n\nRESULTS\nTwenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination.\n\n\nCONCLUSIONS\nRigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. 
We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention.\n\n\nTYPE OF STUDY/LEVEL OF EVIDENCE\nTherapeutic II.", "title": "" }, { "docid": "f1d7e1b222e1ae313c3e751e8ba443f3", "text": "INTRODUCTION\nLapatinib, an orally active tyrosine kinase inhibitor of epidermal growth factor receptor ErbB1 (EGFR) and ErbB2 (HER2), has activity as monotherapy and in combination with chemotherapy in HER2-overexpressing metastatic breast cancer (MBC).\n\n\nMETHODS\nThis phase II single-arm trial assessed the safety and efficacy of first-line lapatinib in combination with paclitaxel in previously untreated patients with HER2-overexpressing MBC. The primary endpoint was the overall response rate (ORR). Secondary endpoints were the duration of response (DoR), time to response, time to progression, progression-free survival (PFS), overall survival, and the incidence and severity of adverse events. All endpoints were investigator- and independent review committee (IRC)-assessed.\n\n\nRESULTS\nThe IRC-assessed ORR was 51% (29/57 patients with complete or partial response) while the investigator-assessed ORR was 77% (44/57). As per the IRC, the median DoR was 39.7 weeks, and the median PFS was 47.9 weeks. The most common toxicities were diarrhea (56%), neutropenia (44%), rash (40%), fatigue (25%), and peripheral sensory neuropathy (25%).\n\n\nCONCLUSIONS\nFirst-line lapatinib plus paclitaxel for HER2-overexpressing MBC produced an encouraging ORR with manageable toxicities. This combination may be useful in first-line treatment for patients with HER2-overexpressing MBC and supports the ongoing evaluation of this combination as first-line therapy in HER2-overexpressing MBC.", "title": "" }, { "docid": "617ec3be557749e0646ad7092a1afcb6", "text": "The difficulty of directly measuring gene flow has lead to the common use of indirect measures extrapolated from genetic frequency data. These measures are variants of FST, a standardized measure of the genetic variance among populations, and are used to solve for Nm, the number of migrants successfully entering a population per generation. Unfortunately, the mathematical model underlying this translation makes many biologically unrealistic assumptions; real populations are very likely to violate these assumptions, such that there is often limited quantitative information to be gained about dispersal from using gene frequency data. While studies of genetic structure per se are often worthwhile, and FST is an excellent measure of the extent of this population structure, it is rare that FST can be translated into an accurate estimate of Nm.", "title": "" }, { "docid": "a92efa40799017f16c9ae624b97d02aa", "text": "BLEU is the de facto standard automatic evaluation metric in machine translation. While BLEU is undeniably useful, it has a number of limitations. Although it works well for large documents and multiple references, it is unreliable at the sentence or sub-sentence levels, and with a single reference. In this paper, we propose new variants of BLEU which address these limitations, resulting in a more flexible metric which is not only more reliable, but also allows for more accurate discriminative training. Our best metric has better correlation with human judgements than standard BLEU, despite using a simpler formulation. 
Moreover, these improvements carry over to a system tuned for our new metric.", "title": "" }, { "docid": "05e4168615c39071bb9640bd5aa6f3d9", "text": "The intestinal microbiome plays an important role in the metabolism of chemical compounds found within food. Bacterial metabolites are different from those that can be generated by human enzymes because bacterial processes occur under anaerobic conditions and are based mainly on reactions of reduction and/or hydrolysis. In most cases, bacterial metabolism reduces the activity of dietary compounds; however, sometimes a specific product of bacterial transformation exhibits enhanced properties. Studies on the metabolism of polyphenols by the intestinal microbiota are crucial for understanding the role of these compounds and their impact on our health. This review article presents possible pathways of polyphenol metabolism by intestinal bacteria and describes the diet-derived bioactive metabolites produced by gut microbiota, with a particular emphasis on polyphenols and their potential impact on human health. Because the etiology of many diseases is largely correlated with the intestinal microbiome, a balance between the host immune system and the commensal gut microbiota is crucial for maintaining health. Diet-related and age-related changes in the human intestinal microbiome and their consequences are summarized in the paper.", "title": "" }, { "docid": "b50498964a73a59f54b3a213f2626935", "text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.", "title": "" }, { "docid": "5481f319296c007412e62129d2ec5943", "text": "We propose a new family of optimization criteria for variational auto-encoding models, generalizing the standard evidence lower bound. We provide conditions under which they recover the data distribution and learn latent features, and formally show that common issues such as blurry samples and uninformative latent features arise when these conditions are not met. 
Based on these new insights, we propose a new sequential VAE model that can generate sharp samples on the LSUN image dataset based on pixel-wise reconstruction loss, and propose an optimization criterion that encourages unsupervised learning of informative latent features.", "title": "" }, { "docid": "316e4fa32d0b000e6f833d146a9e0d80", "text": "Magnetic equivalent circuits (MECs) are becoming an accepted alternative to electrical-equivalent lumped-parameter models and finite-element analysis (FEA) for simulating electromechanical devices. Their key advantages are moderate computational effort, reasonable accuracy, and flexibility in model size. MECs are easily extended into three dimensions. But despite the successful use of MEC as a modeling tool, a generalized 3-D formulation useable for a comprehensive computer-aided design tool has not yet emerged (unlike FEA, where general modeling tools are readily available). This paper discusses the framework of a 3-D MEC modeling approach, and presents the implementation of a variable-sized reluctance network distribution based on 3-D elements. Force calculation and modeling of moving objects are considered. Two experimental case studies, a soft-ferrite inductor and an induction machine, show promising results when compared to measurements and simulations of lumped parameter and FEA models.", "title": "" }, { "docid": "f8d256bf6fea179847bfb4cc8acd986d", "text": "We present a logic for stating properties such as, “after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds”. The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms.", "title": "" }, { "docid": "e5edb616b5d0664cf8108127b0f8684c", "text": "Night vision systems have become an important research area in recent years. Due to variations in weather conditions such as snow, fog, and rain, night images captured by camera may contain high level of noise. These conditions, in real life situations, may vary from no noise to extreme amount of noise corrupting images. Thus, ideal image restoration systems at night must consider various levels of noise and should have a technique to deal with wide range of noisy situations. In this paper, we have presented a new method that works well with different signal to noise ratios ranging from -1.58 dB to 20 dB. For moderate noise, Wigner distribution based algorithm gives good results, whereas for extreme amount of noise 2nd order Wigner distribution is used. The performance of our restoration technique is evaluated using MSE criteria. The results show that our method is capable of dealing with the wide range of Gaussian noise and gives consistent performance throughout.", "title": "" }, { "docid": "64d839525e2d9c71478d862a30aa0277", "text": "The theory of extreme learning machine (ELM) has become very popular on the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (as the multilayer perceptron or the radial basis function neural network). 
Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; obtains the confidence intervals (CIs) without the need of applying methods that are computationally intensive, e.g., bootstrap; and presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM in several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. Achieved results show that the proposed approach produces a competitive accuracy with some additional advantages, namely, automatic production of CIs, reduction of probability of model overfitting, and use of a priori knowledge.", "title": "" }, { "docid": "db7bc8bbfd7dd778b2900973f2cfc18d", "text": "In this paper, the self-calibration of micromechanical acceleration sensors is considered, specifically, based solely on user-generated movement data without the support of laboratory equipment or external sources. The autocalibration algorithm itself uses the fact that under static conditions, the squared norm of the measured sensor signal should match the magnitude of the gravity vector. The resulting nonlinear optimization problem is solved using robust statistical linearization instead of the common analytical linearization for computing bias and scale factors of the accelerometer. To control the forgetting rate of the calibration algorithm, artificial process noise models are developed and compared with conventional ones. The calibration methodology is tested using arbitrarily captured acceleration profiles of the human daily routine and shows that the developed algorithm can significantly reject any misconfiguration of the acceleration sensor.", "title": "" }, { "docid": "f7792dbc29356711c2170d5140030142", "text": "A C-Ku band GaN monolithic microwave integrated circuit (MMIC) transmitter/receiver (T/R) frontend module with a novel RF interface structure has been successfully developed by using multilayer ceramics technology. This interface improves the insertion loss with wideband characteristics operating up to 40 GHz. The module contains a GaN power amplifier (PA) with output power higher than 10 W over 6–18 GHz and a GaN low-noise amplifier (LNA) with a gain of 15.9 dB over 3.2–20.4 GHz and noise figure (NF) of 2.3–3.7 dB over 4–18 GHz. A fabricated T/R module occupying only 12 × 30 mm2 delivers an output power of 10 W up to the Ku-band. To our knowledge, this is the first demonstration of a C-Ku band T/R frontend module using GaN MMICs with wide bandwidth, 10W output power, and small size operating up to the Ku-band.", "title": "" }, { "docid": "b39d1f4f6caed09030a87faeb2c1beeb", "text": "In the present paper we examine the moderating effects of age diversity and team coordination on the relationship between shared leadership and team performance. Using a field sample of 96 individuals in 26 consulting project teams, team members assessed their team’s shared leadership and coordination. Six to eight weeks later, supervisors rated their teams’ performance. Results indicated that shared leadership predicted team performance and both age diversity and coordination moderated the impact of shared leadership on team performance. 
Thereby shared leadership was positively related to team performance when age diversity and coordination were low, whereas higher levels of age diversity and coordination appeared to compensate for lower levels of shared leadership effectiveness. In particular strong effects of shared leadership on team performance were evident when both age diversity and coordination were low, whereas shared leadership was not related to team performance when both age diversity and coordination were high.", "title": "" }, { "docid": "d045e59441a16874f3ccb1d8068e4e6d", "text": "In two experiments, we tested the hypotheses that (a) the difference between liars and truth tellers will be greater when interviewees report their stories in reverse order than in chronological order, and (b) instructing interviewees to recall their stories in reverse order will facilitate detecting deception. In Experiment 1, 80 mock suspects told the truth or lied about a staged event and did or did not report their stories in reverse order. The reverse order interviews contained many more cues to deceit than the control interviews. In Experiment 2, 55 police officers watched a selection of the videotaped interviews of Experiment 1 and made veracity judgements. Requesting suspects to convey their stories in reverse order improved police observers' ability to detect deception and did not result in a response bias.", "title": "" }, { "docid": "a0c126480f0bce527a893853f6f3bec9", "text": "Word problems are an established technique for teaching mathematical modeling skills in K-12 education. However, many students find word problems unconnected to their lives, artificial, and uninteresting. Most students find them much more difficult than the corresponding symbolic representations. To account for this phenomenon, an ideal pedagogy might involve an individually crafted progression of unique word problems that form a personalized plot. We propose a novel technique for automatic generation of personalized word problems. In our system, word problems are generated from general specifications using answer-set programming (ASP). The specifications include tutor requirements (properties of a mathematical model), and student requirements (personalization, characters, setting). Our system takes a logical encoding of the specification, synthesizes a word problem narrative and its mathematical model as a labeled logical plot graph, and realizes the problem in natural language. Human judges found our problems as solvable as the textbook problems, with a slightly more artificial language.", "title": "" }, { "docid": "5f20df3abf9a4f7944af6b3afd16f6f8", "text": "An important step towards the successful integration of information and communication technology (ICT) in schools is to facilitate their capacity to develop a school-based ICT policy resulting in an ICT policy plan. Such a plan can be defined as a school document containing strategic and operational elements concerning the integration of ICT in education. To write such a plan in an efficient way is challenging for schools. Therefore, an online tool [Planning for ICT in Schools (pICTos)] has been developed to guide schools in this process. A multiple case study research project was conducted with three Flemish primary schools to explore the process of developing a school-based ICT policy plan and the supportive role of pICTos within this process. Data from multiple sources (i.e. 
interviews with school leaders and ICT coordinators, school policy documents analysis and a teacher questionnaire) were collected and analysed. The results indicate that schools shape their ICT policy based on specific school data collected and presented by the pICTos environment. School teams learned about the actual and future place of ICT in teaching and learning. Consequently, different policy decisions were made according to each school’s vision on ‘good’ education and ICT integration.", "title": "" }, { "docid": "b4f06236b0babb6cd049c8914170d7bf", "text": "We propose a simple and efficient method for exploiting synthetic images when training a Deep Network to predict a 3D pose from an image. The ability of using synthetic images for training a Deep Network is extremely valuable as it is easy to create a virtually infinite training set made of such images, while capturing and annotating real images can be very cumbersome. However, synthetic images do not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently shown that for exemplar-based approaches, it is possible to learn a mapping from the exemplar representations of real images to the exemplar representations of synthetic images. In this paper, we show that this approach is more general, and that a network can also be applied after the mapping to infer a 3D pose: At run-time, given a real image of the target object, we first compute the features for the image, map them to the feature space of synthetic images, and finally use the resulting features as input to another network which predicts the 3D pose. Since this network can be trained very effectively by using synthetic images, it performs very well in practice, and inference is faster and more accurate than with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and the NYU dataset for 3D hand pose estimation from depth maps. We show that it allows us to outperform the state-of-the-art on both datasets.", "title": "" }, { "docid": "8224818f838fd238879dca0a4b5531c1", "text": "Intelligence plays an important role in supporting military operations. In the course of military intelligence a vast amount of textual data in different languages needs to be analyzed. In addition to information provided by traditional military intelligence, nowadays the internet offers important resources of potential militarily relevant information. However, we are not able to manually handle this vast amount of data. The science of natural language processing (NLP) provides technology to efficiently handle this task, in particular by means of machine translation and text mining. In our research project ISAF-MT we created a statistical machine translation (SMT) system for Dari to German. In this paper we describe how NLP technologies and in particular SMT can be applied to different intelligence processes. We therefore argue that multilingual NLP technology can strongly support military operations.", "title": "" } ]
scidocsrr
881147bbfc9ba324f0ebecf010dec1e3
Characterizing pseudoentropy and simplifying pseudorandom generator constructions
[ { "docid": "7259530c42f4ba91155284ce909d25a6", "text": "We investigate how information leakage reduces computational entropy of a random variable X. Recall that HILL and metric computational entropy are parameterized by quality (how distinguishable is X from a variable Z that has true entropy) and quantity (how much true entropy is there in Z). We prove an intuitively natural result: conditioning on an event of probability p reduces the quality of metric entropy by a factor of p and the quantity of metric entropy by log2 1/p (note that this means that the reduction in quantity and quality is the same, because the quantity of entropy is measured on logarithmic scale). Our result improves previous bounds of Dziembowski and Pietrzak (FOCS 2008), where the loss in the quantity of entropy was related to its original quality. The use of metric entropy simplifies the analogous the result of Reingold et. al. (FOCS 2008) for HILL entropy. Further, we simplify dealing with information leakage by investigating conditional metric entropy. We show that, conditioned on leakage of λ bits, metric entropy gets reduced by a factor 2 in quality and λ in quantity. Our formulation allow us to formulate a “chain rule” for leakage on computational entropy. We show that conditioning on λ bits of leakage reduces conditional metric entropy by λ bits. This is the same loss as leaking from unconditional metric entropy. This result makes it easy to measure entropy even after several rounds of information leakage.", "title": "" } ]
[ { "docid": "0788352b51fb48c27ca14110fdaee8a9", "text": "As a complement to high-layer encryption techniques, physical layer security has been widely recognized as a promising way to enhance wireless security by exploiting the characteristics of wireless channels, including fading, noise, and interference. In order to enhance the received signal power at legitimate receivers and impair the received signal quality at eavesdroppers simultaneously, multiple-antenna techniques have been proposed for physical layer security to improve secrecy performance via exploiting spatial degrees of freedom. This paper provides a comprehensive survey on various multiple-antenna techniques in physical layer security, with an emphasis on transmit beamforming designs for multiple-antenna nodes. Specifically, we provide a detailed investigation on multiple-antenna techniques for guaranteeing secure communications in point-to-point systems, dual-hop relaying systems, multiuser systems, and heterogeneous networks. Finally, future research directions and challenges are identified.", "title": "" }, { "docid": "70f8d5a6d6ff36dd669403d7865bab94", "text": "Addressing the problem of information overload, automatic multi-document summarization (MDS) has been widely utilized in the various real-world applications. Most of existing approaches adopt term-based representation for documents which limit the performance of MDS systems. In this paper, we proposed a novel unsupervised pattern-enhanced topic model (PETMSum) for the MDS task. PETMSum combining pattern mining techniques with LDA topic modelling could generate discriminative and semantic rich representations for topics and documents so that the most representative, non-redundant, and topically coherent sentences can be selected automatically to form a succinct and informative summary. Extensive experiments are conducted on the data of document understanding conference (DUC) 2006 and 2007. The results prove the effectiveness and efficiency of our proposed approach.", "title": "" }, { "docid": "9d82ce8e6630a9432054ed97752c7ec6", "text": "Development is the powerful process involving a genome in the transformation from one egg cell to a multicellular organism with many cell types. The dividing cells manage to organize and assign themselves special, differentiated roles in a reliable manner, creating a spatio-temporal pattern and division of labor. This despite the fact that little positional information may be available to them initially to guide this patterning. Inspired by a model of developmental biologist L. Wolpert, we simulate this situation in an evolutionary setting where individuals have to grow into “French flag” patterns. The cells in our model exist in a 2-layer Potts model physical environment. Controlled by continuous genetic regulatory networks, identical for all cells of one individual, the cells can individually differ in parameters including target volume, shape, orientation, and diffusion. Intercellular communication is possible via secretion and sensing of diffusing morphogens. Evolved individuals growing from a single cell can develop the French flag pattern by setting up and maintaining asymmetric morphogen gradients – a behavior predicted by several theoretical models.", "title": "" }, { "docid": "0186c053103d06a8ddd054c3c05c021b", "text": "The brain-gut axis is a bidirectional communication system between the central nervous system and the gastrointestinal tract. Serotonin functions as a key neurotransmitter at both terminals of this network. 
Accumulating evidence points to a critical role for the gut microbiome in regulating normal functioning of this axis. In particular, it is becoming clear that the microbial influence on tryptophan metabolism and the serotonergic system may be an important node in such regulation. There is also substantial overlap between behaviours influenced by the gut microbiota and those which rely on intact serotonergic neurotransmission. The developing serotonergic system may be vulnerable to differential microbial colonisation patterns prior to the emergence of a stable adult-like gut microbiota. At the other extreme of life, the decreased diversity and stability of the gut microbiota may dictate serotonin-related health problems in the elderly. The mechanisms underpinning this crosstalk require further elaboration but may be related to the ability of the gut microbiota to control host tryptophan metabolism along the kynurenine pathway, thereby simultaneously reducing the fraction available for serotonin synthesis and increasing the production of neuroactive metabolites. The enzymes of this pathway are immune and stress-responsive, both systems which buttress the brain-gut axis. In addition, there are neural processes in the gastrointestinal tract which can be influenced by local alterations in serotonin concentrations with subsequent relay of signals along the scaffolding of the brain-gut axis to influence CNS neurotransmission. Therapeutic targeting of the gut microbiota might be a viable treatment strategy for serotonin-related brain-gut axis disorders.", "title": "" }, { "docid": "4cfe3df75371f28485fe74c099fd75e7", "text": "This paper focuses mainly on the problem of Chinese medical question answer matching, which is arguably more challenging than open-domain question answer matching in English due to the combination of its domain-restricted nature and the language-specific features of Chinese. We present an end-to-end character-level multi-scale convolutional neural framework in which character embeddings instead of word embeddings are used to avoid Chinese word segmentation in text preprocessing, and multi-scale convolutional neural networks (CNNs) are then introduced to extract contextual information from either question or answer sentences over different scales. The proposed framework can be trained with minimal human supervision and does not require any handcrafted features, rule-based patterns, or external resources. To validate our framework, we create a new text corpus, named cMedQA, by harvesting questions and answers from an online Chinese health and wellness community. The experimental results on the cMedQA dataset show that our framework significantly outperforms several strong baselines, and achieves an improvement of top-1 accuracy by up to 19%.", "title": "" }, { "docid": "863c806d29c15dd9b9160eae25316dfc", "text": "This paper presents new structural statistical matrices which are gray level size zone matrix (SZM) texture descriptor variants. The SZM is based on the cooccurrences of size/intensity of each flat zone (connected pixels with the same gray level). The first improvement increases the information processed by merging multiple gray-level quantizations and reduces the required parameter numbers. New improved descriptors were especially designed for supervised cell texture classification. They are illustrated thanks to two different databases built from quantitative cell biology. 
The second alternative characterizes the DNA organization during the mitosis, according to zone intensities radial distribution. The third variant is a matrix structure generalization for the fibrous texture analysis, by changing the intensity/size pair into the length/orientation pair of each region.", "title": "" }, { "docid": "876ee0ecb1b6196a19fb2ab85b86f19d", "text": "This paper presents new experimental data and an improved mechanistic model for the Gas-Liquid Cylindrical Cyclone (GLCC) separator. The data were acquired utilizing a 3” ID laboratory-scale GLCC, and are presented along with a limited number of field data. The data include measurements of several parameters of the flow behavior and the operational envelope of the GLCC. The operational envelope defines the conditions for which there will be no liquid carry-over or gas carry-under. The developed model enables the prediction of the hydrodynamic flow behavior in the GLCC, including the operational envelope, equilibrium liquid level, vortex shape, velocity and holdup distributions and pressure drop across the GLCC. The predictions of the model are compared with the experimental data. These provide the state-of-the-art for the design of GLCC’s for the industry. Introduction The gas-liquid separation technology currently used by the petroleum industry is mostly based on the vessel-type separator which is large, heavy and expensive to purchase and operate. This technology has not been substantially improved over the last several decades. In recent years the industry has shown interest in the development and application of alternatives to the vessel-type separator. One such alternative is the use of compact or in-line separators, such as the Gas-Liquid Cylindrical Cyclone (GLCC) separator. As can be seen in Fig. 1, the GLCC is an emerging class of vertical compact separators, as compared to the very mature technology of the vessel-type separator. (Fig. 1: development and growth of separator technologies, including GLCCs, FWKO cyclones, gas cyclones, conventional horizontal and vertical separators, finger storage slug catchers, and vessel-type slug catchers.)", "title": "" }, { "docid": "80ac8b65b7c125fa98537be327f5f200", "text": "Occupational science is an emerging basic science which supports the practice of occupational therapy. Its roots in the rich traditions of occupational therapy are explored and its current configuration is introduced. Specifications which the science needs to meet as it is further developed and refined are presented. Compatible disciplines and research approaches are identified. Examples of basic science research questions and their potential contributions to occupational therapy practice are suggested.", "title": "" }, { "docid": "806a83d17d242a7fd5272862158db344", "text": "Solar power has become an attractive alternative of electricity energy. Solar cells that form the basis of a solar power system are mainly based on multicrystalline silicon. A set of solar cells are assembled and interconnected into a large solar module to offer a large amount of electricity power for commercial applications. Many defects in a solar module cannot be visually observed with the conventional CCD imaging system. This paper aims at defect inspection of solar modules in electroluminescence (EL) images. The solar module charged with electrical current will emit infrared light whose intensity will be darker for intrinsic crystal grain boundaries and extrinsic defects including micro-cracks, breaks and finger interruptions. 
The EL image can distinctly highlight the invisible defects but also create a random inhomogeneous background, which makes the inspection task extremely difficult. The proposed method is based on independent component analysis (ICA), and involves a learning and a detection stage. The large solar module image is first divided into small solar cell subimages. In the training stage, a set of defect-free solar cell subimages are used to find a set of independent basis images using ICA. In the inspection stage, each solar cell subimage under inspection is reconstructed as a linear combination of the learned basis images. The coefficients of the linear combination are used as the feature vector for classification. Also, the reconstruction error between the test image and its reconstructed image from the ICA basis images is also evaluated for detecting the presence of defects. Experimental results have shown that the image reconstruction with basis images distinctly outperforms the ICA feature extraction approach. It can achieve a mean recognition rate of 93.4% for a set of 80 test samples.", "title": "" }, { "docid": "385b573c33a9e4f81afd966c9277c0c1", "text": "According to American College of Rheumatology fibromyalgia syndrome (FMS) is a common health problem characterized by widespread pain and tenderness. The pain and tenderness, although chronic, present a tendency to fluctuate both in intensity and location around the body. Patients with FMS experience fatigue and often have sleep disorders. It is estimated that FMS affects two to four percent of the general population. It is most common in women, though it can also occur in men. FMS most often first occur in the middle adulthood, but it can start as early as in the teen years or in the old age. The causes of FMS are unclear. Various infectious agents have recently been linked with the development of FMS. Some genes are potentially linked with an increased risk of developing FMS and some other health problems, which are common comorbidities to FMS. It is the genes that determine individual sensitivity and reaction to pain, quality of the antinociceptive system and complex biochemistry of pain sensation. Diagnosis and therapy may be complex and require cooperation of many specialists. Rheumatologists often make the diagnosis and differentiate FMS with other disorders from the rheumatoid group. FMS patients may also require help from the Psychiatric Clinic (Out-Patients Clinic) due to accompanying mental problems. As the pharmacological treatment options are limited and only complex therapy gives relatively good results, the treatment plan should include elements of physical therapy.", "title": "" }, { "docid": "d2cf6c5241e2169c59cfbb39bf3d09bb", "text": "As remote exploits further dwindle and perimeter defenses become the standard, remote client-side attacks are becoming the standard vector for attackers. Modern operating systems have quelled the explosion of client-side vulnerabilities using mitigation techniques such as data execution prevention (DEP) and address space layout randomization (ASLR). This work illustrates two novel techniques to bypass these mitigations. The two techniques leverage the attack surface exposed by the script interpreters commonly accessible within the browser. The first technique, pointer inference, is used to find the memory address of a string of shellcode within the Adobe Flash Player's ActionScript interpreter despite ASLR. 
The second technique, JIT spraying, is used to write shellcode to executable memory, bypassing DEP protections, by leveraging predictable behaviors of the ActionScript JIT compiler. Previous attacks are examined and future research directions are discussed.", "title": "" }, { "docid": "ff4e26c7770898dbd753e33c1ced1a1b", "text": "Large mammals, including humans, save much of the energy needed for running by means of elastic structures in their legs and feet1,2. Kinetic and potential energy removed from the body in the first half of the stance phase is stored briefly as elastic strain energy and then returned in the second half by elastic recoil. Thus the animal runs in an analogous fashion to a rubber ball bouncing along. Among the elastic structures involved, the tendons of distal leg muscles have been shown to be important2,3. Here we show that the elastic properties of the arch of the human foot are also important.", "title": "" }, { "docid": "d5bc87dc8c93d2096f048437315e6634", "text": "The diversity of an ensemble can be calculated in a variety of ways. Here a diversity metric and a means for altering the diversity of an ensemble, called “thinning”, are introduced. We experiment with thinning algorithms evaluated on ensembles created by several techniques on 22 publicly available datasets. When compared to other methods, our percentage correct diversity measure algorithm shows a greater correlation between the increase in voted ensemble accuracy and the diversity value. Also, the analysis of different ensemble creation methods indicates each has varying levels of diversity. Finally, the methods proposed for thinning again show that ensembles can be made smaller without loss in accuracy. Information Fusion Journal", "title": "" }, { "docid": "e0f7c82754694084c6d05a2d37be3048", "text": "Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoder often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into neural encoder-decoder via the use of external memory as a mixture model, namely Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metricbased and qualitative evaluations.", "title": "" }, { "docid": "0bce954374d27d4679eb7562350674fc", "text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. 
Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.", "title": "" }, { "docid": "2fb5f1e17e888049bd0f506f3a37f377", "text": "While the Semantic Web has evolved to support the meaningful exchange of heterogeneous data through shared and controlled conceptualisations, Web 2.0 has demonstrated that large-scale community tagging sites can enrich the semantic web with readily accessible and valuable knowledge. In this paper, we investigate the integration of a movies folksonomy with a semantic knowledge base about usermovie rentals. The folksonomy is used to enrich the knowledge base with descriptions and categorisations of movie titles, and user interests and opinions. Using tags harvested from the Internet Movie Database, and movie rating data gathered by Netflix, we perform experiments to investigate the question that folksonomy-generated movie tag-clouds can be used to construct better user profiles that reflect a user’s level of interest in different kinds of movies, and therefore, provide a basis for prediction of their rating for a previously unseen movie.", "title": "" }, { "docid": "6dbc238948d555578039ed268f3d4f51", "text": "Chidi Okafor, David M. Ward, Michele H. Mokrzycki, Robert Weinstein, Pamela Clark, and Rasheed A. Balogun* Department of Medicine, Division of Nephrology, University of Virginia Health System, Charlottesville, Virginia Department of Medicine, University of California, San Diego, California Department of Medicine, Albert Einstein College of Medicine, Bronx, New York Departments of Medicine and Pathology, University of Massachusetts, Amherst, Massachusetts Department of Pathology, University of Virginia, Charlottesville, Virginia", "title": "" }, { "docid": "5157063545b7ec7193126951c3bdb850", "text": "This paper presents an integrated system for navigation parameter estimation using sequential aerial images, where navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of an aircraft by accumulating relative displacement estimates extracted from two successive aerial images. Simple accumulation of parameter values decreases the reliability of the extracted parameter estimates as an aircraft goes on navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position error generated in relative position estimation. Absolute position estimation algorithms by image matching and digital elevation model (DEM) matching are presented. In image matching, a robust-oriented Hausdorff measure (ROHM) is employed, whereas in DEM matching the algorithm using multiple image pairs is used. 
Experiments with four real aerial image sequences show the effectiveness of the proposed integrated position estimation algorithm.", "title": "" }, { "docid": "f6e8f2f990ca60a5b659c1c7a19d0638", "text": "OBJECTIVE\nTo develop an understanding of the stability of mental health during imprisonment through review of existing research evidence relating physical prison environment to mental state changes in prisoners.\n\n\nMETHOD\nA systematic literature search was conducted looking at changes in mental state and how this related to various aspects of imprisonment and the prison environment.\n\n\nRESULTS\nFifteen longitudinal studies were found, and from these, three broad themes were delineated: being imprisoned and aspects of the prison regime; stage of imprisonment and duration of sentence; and social density. Reception into prison results in higher levels of psychiatric symptoms that seem to improve over time; otherwise, duration of imprisonment appears to have no significant impact on mental health. Regardless of social density, larger prisons are associated with poorer mental state, as are extremes of social density.\n\n\nCONCLUSION\nThere are large gaps in the literature relating prison environments to changes in mental state; in particular, high-quality longitudinal studies are needed. Existing research suggests that although entry to prison may be associated with deterioration in mental state, it tends to improve with time. Furthermore, overcrowding, ever more likely as prison populations rise, is likely to place a particular burden on mental health services.", "title": "" } ]
scidocsrr
981688ee2081d695d8ec1090608de8b8
DENFIS: dynamic evolving neural-fuzzy inference system and its application for time-series prediction
[ { "docid": "b6de0b3fb29edff86afc4fadac687e9d", "text": "An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the \"neural gas\" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation.", "title": "" }, { "docid": "93dba45f5309d77b63c8957609f146b7", "text": "Research papers available on the World Wide Web (WWW or Web) areoften poorly organized, often exist in forms opaque to searchengines (e.g. Postscript), and increase in quantity daily.Significant amounts of time and effort are typically needed inorder to find interesting and relevant publications on the Web. Wehave developed a Web based information agent that assists the userin the process of performing a scientific literature search. Givena set of keywords, the agent uses Web search engines and heuristicsto locate and download papers. The papers are parsed in order toextract information features such as the abstract and individuallyidentified citations. The agents Web interface can be used to findrelevant papers in the database using keyword searches, or bynavigating the links between papers formed by the citations. Linksto both citing and cited publications can be followed. In additionto simple browsing and keyword searches, the agent can find paperswhich are similar to a given paper using word information and byanalyzing common citations made by the papers.", "title": "" } ]
[ { "docid": "b1394b4534d1a2d62767f885c180903b", "text": "OBJECTIVE\nTo determine the value of measuring fetal femur and humerus length at 11-14 weeks of gestation in screening for chromosomal defects.\n\n\nMETHODS\nFemur and humerus lengths were measured using transabdominal ultrasound in 1018 fetuses immediately before chorionic villus sampling for karyotyping at 11-14 weeks of gestation. In the group of chromosomally normal fetuses, regression analysis was used to determine the association between long bone length and crown-rump length (CRL). Femur and humerus lengths in fetuses with trisomy 21 were compared with those of normal fetuses.\n\n\nRESULTS\nThe median gestation was 12 (range, 11-14) weeks. The karyotype was normal in 920 fetuses and abnormal in 98, including 65 cases of trisomy 21. In the chromosomally normal group the fetal femur and humerus lengths increased significantly with CRL (femur length = - 6.330 + 0.215 x CRL in mm, r = 0.874, P < 0.0001; humerus length = - 6.240 + 0.220 x CRL in mm, r = 0.871, P < 0.0001). In the Bland-Altman plot the mean difference between paired measurements of femur length was 0.21 mm (95% limits of agreement - 0.52 to 0.48 mm) and of humerus length was 0.23 mm (95% limits of agreement - 0.57 to 0.55 mm). In the trisomy 21 fetuses the median femur and humerus lengths were significantly below the appropriate normal mean for CRL by 0.4 and 0.3 mm, respectively (P = 0.002), but they were below the respective 5th centile of the normal range in only six (9.2%) and three (4.6%) of the cases, respectively.\n\n\nCONCLUSION\nAt 11-14 weeks of gestation the femur and humerus lengths in trisomy 21 fetuses are significantly reduced but the degree of deviation from normal is too small for these measurements to be useful in screening for trisomy 21.", "title": "" }, { "docid": "840c42456a69d20deead9f8574f6ee14", "text": "Millimeter wave (mmWave) is a promising approach for the fifth generation cellular networks. It has a large available bandwidth and high gain antennas, which can offer interference isolation and overcome high frequency-dependent path loss. In this paper, we study the non-uniform heterogeneous mmWave network. Non-uniform heterogeneous networks are more realistic in practical scenarios than traditional independent homogeneous Poisson point process (PPP) models. We derive the signal-to-noise-plus-interference ratio (SINR) and rate coverage probabilities for a two-tier non-uniform millimeter-wave heterogeneous cellular network, where the macrocell base stations (MBSs) are deployed as a homogeneous PPP and the picocell base stations (PBSs) are modeled as a Poisson hole process (PHP), dependent on the MBSs. Using tools from stochastic geometry, we derive the analytical results for the SINR and rate coverage probabilities. The simulation results validate the analytical expressions. Furthermore, we find that there exists an optimum density of the PBS that achieves the best coverage probability and the change rule with different radii of the exclusion region. Finally, we show that as expected, mmWave outperforms microWave cellular network in terms of rate coverage probability for this system.", "title": "" }, { "docid": "357e03d12dc50cf5ce27cadd50ac99fa", "text": "This paper presents a linear solution for reconstructing the 3D trajectory of a moving point from its correspondence in a collection of 2D perspective images, given the 3D spatial pose and time of capture of the cameras that produced each image. 
Triangulation-based solutions do not apply, as multiple views of the point may not exist at each instant in time. A geometric analysis of the problem is presented and a criterion, called reconstructibility, is defined to precisely characterize the cases when reconstruction is possible, and how accurate it can be. We apply the linear reconstruction algorithm to reconstruct the time evolving 3D structure of several real-world scenes, given a collection of non-coincidental 2D images.", "title": "" }, { "docid": "1ace2a8a8c6b4274ac0891e711d13190", "text": "Recent music information retrieval (MIR) research pays increasing attention to music classification based on moods expressed by music pieces. The first Audio Mood Classification (AMC) evaluation task was held in the 2007 running of the Music Information Retrieval Evaluation eXchange (MIREX). This paper describes important issues in setting up the task, including dataset construction and ground-truth labeling, and analyzes human assessments on the audio dataset, as well as system performances from various angles. Interesting findings include system performance differences with regard to mood clusters and the levels of agreement amongst human judgments regarding mood labeling. Based on these analyses, we summarize experiences learned from the first community scale evaluation of the AMC task and propose recommendations for future AMC and similar evaluation tasks.", "title": "" }, { "docid": "3840043afe85979eb901ad05b5b8952f", "text": "Cross media retrieval systems have received increasing interest in recent years. Due to the semantic gap between low-level features and high-level semantic concepts of multimedia data, many researchers have explored joint-model techniques in cross media retrieval systems. Previous joint-model approaches usually focus on two traditional ways to design cross media retrieval systems: (a) fusing features from different media data; (b) learning different models for different media data and fusing their outputs. However, the process of fusing features or outputs will lose both low- and high-level abstraction information of media data. Hence, both ways do not really reveal the semantic correlations among the heterogeneous multimedia data. In this paper, we introduce a novel method for the cross media retrieval task, named Parallel Field Alignment Retrieval (PFAR), which integrates a manifold alignment framework from the perspective of vector fields. Instead of fusing original features or outputs, we consider the cross media retrieval as a manifold alignment problem using parallel fields. The proposed manifold alignment algorithm can effectively preserve the metric of data manifolds, model heterogeneous media data and project their relationship into intermediate latent semantic spaces during the process of manifold alignment. After the alignment, the semantic correlations are also determined. In this way, the cross media retrieval task can be resolved by the determined semantic correlations. Comprehensive experimental results have demonstrated the effectiveness of our approach.", "title": "" }, { "docid": "cb00e564a81ace6b75e776f1fe41fb8f", "text": "INDIVIDUAL PROCESSES IN INTERGROUP BEHAVIOR ................................ 3 From Individual to Group Impressions ...................................................................... 3 GROUP MEMBERSHIP AND INTERGROUP BEHAVIOR .................................. 7 The Scope and Range of Ethnocentrism .................................................................... 
8; The Development of Ethnocentrism, 9; Intergroup Conflict and Competition, 12; Interpersonal and intergroup behavior, 13; Intergroup conflict and group cohesion, 15; Power and status in intergroup behavior, 16; Social Categorization and Intergroup Behavior, 20; Social categorization: cognitions, values, and groups, 20; Social categorization and intergroup discrimination, 23; Social identity and social comparison, 24; THE REDUCTION OF INTERGROUP DISCRIMINATION, 27; Intergroup Cooperation and Superordinate Goals, 28; Intergroup Contact, 28; Multigroup Membership and \"Individualization\" of the Outgroup, 29; SUMMARY, 30", "title": "" }, { "docid": "6d41ec322f71c32195119807f35fde53", "text": "Input current distortion in the vicinity of input voltage zero crossings of boost single-phase power factor corrected (PFC) ac-dc converters is studied in this paper. Previously known causes for the zero-crossing distortion are reviewed and are shown to be inadequate in explaining the observed input current distortion, especially under high ac line frequencies. A simple linear model is then presented which reveals two previously unknown causes for zero-crossing distortion, namely, the leading phase of the input current and the lack of critical damping in the current loop. Theoretical and practical limitations in reducing the phase lead and increasing the damping factor are discussed. A simple phase compensation technique to reduce the zero-crossing distortion is also presented. Numerical simulation and experimental results are presented to validate the theory.", "title": "" }, { "docid": "b97e58184a94d6827bf294a3b1f91687", "text": "A good and robust sensor data fusion in diverse weather conditions is a quite challenging task. There are several fusion architectures in the literature, e.g. the sensor data can be fused right at the beginning (Early Fusion), or they can be first processed separately and then concatenated later (Late Fusion). In this work, different fusion architectures are compared and evaluated by means of object detection tasks, in which the goal is to recognize and localize predefined objects in a stream of data. Usually, state-of-the-art object detectors based on neural networks are highly optimized for good weather conditions, since the well-known benchmarks only consist of sensor data recorded in optimal weather conditions. Therefore, the performance of these approaches decreases enormously or even fails in adverse weather conditions. In this work, different sensor fusion architectures are compared for good and adverse weather conditions for finding the optimal fusion architecture for diverse weather situations. 
A new training strategy is also introduced such that the performance of the object detector is greatly enhanced in adverse weather scenarios or if a sensor fails. Furthermore, the paper responds to the question if the detection accuracy can be increased further by providing the neural network with a-priori knowledge such as the spatial calibration of the sensors.", "title": "" }, { "docid": "76d10dc3b823d7cae01269b2b7f15745", "text": "The new challenge for designers and HCI researchers is to develop software tools for effective e-learning. Learner-Centered Design (LCD) provides guidelines to make new learning domains accessible in an educationally productive manner. A number of new issues have been raised because of the new \"vehicle\" for education. Effective e-learning systems should include sophisticated and advanced functions, yet their interface should hide their complexity, providing an easy and flexible interaction suited to catch students' interest. In particular, personalization and integration of learning paths and communication media should be provided.It is first necessary to dwell upon the difference between attributes for platforms (containers) and for educational modules provided by a platform (contents). In both cases, it is hard to go deeply into pedagogical issues of the provided knowledge content. This work is a first step towards identifying specific usability attributes for e-learning systems, capturing the peculiar features of this kind of applications. We report about a preliminary users study involving a group of e-students, observed during their interaction with an e-learning system in a real situation. We then propose to adapt to the e-learning domain the so called SUE (Systematic Usability Evaluation) inspection, providing evaluation patterns able to drive inspectors' activities in the evaluation of an e-learning tool.", "title": "" }, { "docid": "33d36d081564bb08e95323b17945e86b", "text": "Sparse matrix-vector multiplication (SpMV) is an important kernel in scientific and engineering computing. Straightforward parallel implementations of SpMV often perform poorly, and with the increasing variety of architectural features in multicore processors, it is getting more difficult to determine the sparse matrix data structure and corresponding SpMV implementation that optimize performance. In this paper we present pOSKI, an autotuning system for SpMV that automatically searches over a large set of possible data structures and implementations to optimize SpMV performance on multicore platforms. pOSKI explores a design space that depends on both the nonzero pattern of the sparse matrix, typically not known until run-time, and the architecture, which is explored off-line as much as possible, in order to reduce tuning time. We demonstrate significant performance improvements compared to previous serial and parallel implementations, and compare performance to upper bounds based on architectural models. General Terms: Design, Experimentation, Performance Additional", "title": "" }, { "docid": "72aef0bd0793116983c11883ebfb5525", "text": "Building facade classification by architectural styles allows categorization of large databases of building images into semantic categories belonging to certain historic periods, regions and cultural influences. Image databases sorted by architectural styles permit effective and fast image search for the purposes of content-based image retrieval, 3D reconstruction, 3D city-modeling, virtual tourism and indexing of cultural heritage buildings. 
Building facade classification is viewed as a task of classifying separate architectural structural elements, like windows, domes, towers, columns, etc, as every architectural style applies certain rules and characteristic forms for the design and construction of the structural parts mentioned. In the context of building facade architectural style classification the current paper objective is to classify the architectural style of facade windows. Typical windows belonging to Romanesque, Gothic and Renaissance/Baroque European main architectural periods are classified. The approach is based on clustering and learning of local features, applying intelligence that architects use to classify windows of the mentioned architectural styles in the training stage.", "title": "" }, { "docid": "07b889a2b1a18bc1f91021f3b889474a", "text": "In this study, we show a correlation between electrical properties (relative permittivity-εr and conductivity-σ) of blood plasma and plasma glucose concentration. In order to formulate that correlation, we performed electrical property measurements on blood samples collected from 10 adults between the ages of 18 and 40 at University of Alabama Birmingham (UAB) Children's hospital. The measurements are conducted between 500 MHz and 20 GHz band. Using the data obtained from measurements, we developed a single-pole Cole-Cole model for εr and σ as a function of plasma blood glucose concentration. To provide an application, we designed a microstrip patch antenna that can be used to predict the glucose concentration within a given plasma sample. Simulation results regarding antenna design and its performance are also presented.", "title": "" }, { "docid": "e45fe4344cf0d6c3077389ea73e427c6", "text": "Vehicle tracking data is an essential “raw” material for a broad range of applications such as traffic management and control, routing, and navigation. An important issue with this data is its accuracy. The method of sampling vehicular movement using GPS is affected by two error sources and consequently produces inaccurate trajectory data. To become useful, the data has to be related to the underlying road network by means of map matching algorithms. We present three such algorithms that consider especially the trajectory nature of the data rather than simply the current position as in the typical map-matching case. An incremental algorithm is proposed that matches consecutive portions of the trajectory to the road network, effectively trading accuracy for speed of computation. In contrast, the two global algorithms compare the entire trajectory to candidate paths in the road network. The algorithms are evaluated in terms of (i) their running time and (ii) the quality of their matching result. Two novel quality measures utilizing the Fréchet distance are introduced and subsequently used in an experimental evaluation to assess the quality of matching real tracking data to a road network.", "title": "" }, { "docid": "441f80a25e7a18760425be5af1ab981d", "text": "This paper proposes efficient algorithms for group sparse optimization with mixed `2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. 
The `2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional `1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the `2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.", "title": "" }, { "docid": "f77495366909b9713463bebf2b4ff2fc", "text": "This paper introduces a fully deep learning approach to monocular SLAM, which can perform simultaneous localization using a neural network for learning visual odometry (L-VO) and dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow. Given this 3D flow, the dual-stream L-VO network can then predict the 6DOF relative pose and furthermore reconstruct the vehicle trajectory. In order to learn the correlation between motion directions, the Bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall performance of 2.68 % for average translational error and 0.0143°/m for average rotational error on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, that is, learning monocular odometry combined with dense 3D mapping, is achieved.", "title": "" }, { "docid": "c8e4450de63dc54b5802566d589d4cdc", "text": "BACKGROUND\nMore than 1.5 million Americans have Parkinson disease (PD), and this figure is expected to rise as the population ages. However, the dental literature offers little information about the illness.\n\n\nTYPES OF STUDIES REVIEWED\nThe authors conducted a MEDLINE search using the key terms \"Parkinson's disease,\" \"medical management\" and \"dentistry.\" They selected contemporaneous articles published in peer-reviewed journals and gave preference to articles reporting randomized controlled trials.\n\n\nRESULTS\nPD is a progressive neurodegenerative disorder caused by loss of dopaminergic and nondopaminergic neurons in the brain. These deficits result in tremor, slowness of movement, rigidity, postural instability and autonomic and behavioral dysfunction. Treatment consists of administering medications that replace dopamine, stimulate dopamine receptors and modulate other neurotransmitter systems.\n\n\nCLINICAL IMPLICATIONS\nOral health may decline because of tremors, muscle rigidity and cognitive deficits. The dentist should consult with the patient's physician to establish the patient's competence to provide informed consent and to determine the presence of comorbid illnesses. Scheduling short morning appointments that begin 90 minutes after administration of PD medication enhances the patient's ability to cooperate with care. Inclination of the dental chair at 45 degrees, placement of a bite prop, use of a rubber dam and high-volume oral evacuation enhance airway protection. 
To avoid adverse drug interactions with levodopa and entacapone, the dentist should limit administration of local anesthetic agents to three cartridges of 2 percent lidocaine with 1:100,000 epinephrine per half hour, and patients receiving selegiline should not be given agents containing epinephrine or levonordefrin. The dentist should instruct the patient and the caregiver in good oral hygiene techniques.", "title": "" }, { "docid": "acf6a62e487b79fc0500aa5e6bbb0b0b", "text": "This paper proposes a low-cost, easily realizable strategy to equip a reinforcement learning (RL) agent the capability of behaving ethically. Our model allows the designers of RL agents to solely focus on the task to achieve, without having to worry about the implementation of multiple trivial ethical patterns to follow. Based on the assumption that the majority of human behavior, regardless which goals they are achieving, is ethical, our design integrates human policy with the RL policy to achieve the target objective with less chance of violating the ethical code that human beings normally obey.", "title": "" }, { "docid": "e573d85271e3f3cc54b774de8a5c6dd9", "text": "This paper explores the use of a learned classifier for post-OCR text correction. Experiments with the Arabic language show that this approach, which integrates a weighted confusion matrix and a shallow language model, improves the vast majority of segmentation and recognition errors, the most frequent types of error on our dataset.", "title": "" }, { "docid": "fb1e23b956c5b60f581f9a32001a9783", "text": "Deep convolutional neural networks (CNNs) have recently shown very high accuracy in a wide range of cognitive tasks, and due to this, they have received significant interest from the researchers. Given the high computational demands of CNNs, custom hardware accelerators are vital for boosting their performance. The high energy efficiency, computing capabilities and reconfigurability of FPGA make it a promising platform for hardware acceleration of CNNs. In this paper, we present a survey of techniques for implementing and optimizing CNN algorithms on FPGA. We organize the works in several categories to bring out their similarities and differences. This paper is expected to be useful for researchers in the area of artificial intelligence, hardware architecture and system design.", "title": "" } ]
scidocsrr
3cbad897852a4f69f4b5b1cb25a797df
Using Neo4j graph database in social network analysis
[ { "docid": "b9f7c3cbf856ff9a64d7286c883e2640", "text": "Graph database models can be defined as those in which data structures for the schema and instances are modeled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors. These models took off in the eighties and early nineties alongside object-oriented models. Their influence gradually died out with the emergence of other database models, in particular geographical, spatial, semistructured, and XML. Recently, the need to manage information with graph-like nature has reestablished the relevance of this area. The main objective of this survey is to present the work that has been conducted in the area of graph database modeling, concentrating on data structures, query languages, and integrity constraints.", "title": "" } ]
[ { "docid": "9d5d667c6d621bd90a688c993065f5df", "text": "Creative individuals increasingly rely on online crowdfunding platforms to crowdsource funding for new ventures. For novice crowdfunding project creators, however, there are few resources to turn to for assistance in the planning of crowdfunding projects. We are building a tool for novice project creators to get feedback on their project designs. One component of this tool is a comparison to existing projects. As such, we have applied a variety of machine learning classifiers to learn the concept of a successful online crowdfunding project at the time of project launch. Currently our classifier can predict with roughly 68% accuracy, whether a project will be successful or not. The classification results will eventually power a prediction segment of the proposed feedback tool. Future work involves turning the results of the machine learning algorithms into human-readable content and integrating this content into the feedback tool.", "title": "" }, { "docid": "6f4e5448f956017c39c1727e0eb5de7b", "text": "Recently, community search over graphs has attracted significant attention and many algorithms have been developed for finding dense subgraphs from large graphs that contain given query nodes. In applications such as analysis of protein protein interaction (PPI) networks, citation graphs, and collaboration networks, nodes tend to have attributes. Unfortunately, most previously developed community search algorithms ignore these attributes and result in communities with poor cohesion w.r.t. their node attributes. In this paper, we study the problem of attribute-driven community search, that is, given an undirected graph G where nodes are associated with attributes, and an input query Q consisting of nodes Vq and attributes Wq , find the communities containing Vq , in which most community members are densely inter-connected and have similar attributes. We formulate our problem of finding attributed truss communities (ATC), as finding all connected and close k-truss subgraphs containing Vq, that are locally maximal and have the largest attribute relevance score among such subgraphs. We design a novel attribute relevance score function and establish its desirable properties. The problem is shown to be NP-hard. However, we develop an efficient greedy algorithmic framework, which finds a maximal k-truss containing Vq, and then iteratively removes the nodes with the least popular attributes and shrinks the graph so as to satisfy community constraints. We also build an elegant index to maintain the known k-truss structure and attribute information, and propose efficient query processing algorithms. Extensive experiments on large real-world networks with ground-truth communities shows the efficiency and effectiveness of our proposed methods.", "title": "" }, { "docid": "b9e6d6d2625a713e8fa7491bc1b24223", "text": "Percutaneous radiofrequency ablation (RFA) is becoming a standard minimally invasive clinical procedure for the treatment of liver tumors. However, planning the applicator placement such that the malignant tissue is completely destroyed, is a demanding task that requires considerable experience. In this work, we present a fast GPU-based real-time approximation of the ablation zone incorporating the cooling effect of liver vessels. Weighted distance fields of varying RF applicator types are derived from complex numerical simulations to allow a fast estimation of the ablation zone. 
Furthermore, the heat-sink effect of the cooling blood flow close to the applicator's electrode is estimated by means of a preprocessed thermal equilibrium representation of the liver parenchyma and blood vessels. Utilizing the graphics card, the weighted distance field incorporating the cooling blood flow is calculated using a modular shader framework, which facilitates the real-time visualization of the ablation zone in projected slice views and in volume rendering. The proposed methods are integrated in our software assistant prototype for planning RFA therapy. The software allows the physician to interactively place virtual RF applicator models. The real-time visualization of the corresponding approximated ablation zone facilitates interactive evaluation of the tumor coverage in order to optimize the applicator's placement such that all cancer cells are destroyed by the ablation.", "title": "" }, { "docid": "874876e2ed9e4a2ba044cf62d408da55", "text": "It is widely believed that refactoring improves software quality and programmer productivity by making it easier to maintain and understand software systems. However, the role of refactorings has not been systematically investigated using fine-grained evolution history. We quantitatively and qualitatively studied API-level refactorings and bug fixes in three large open source projects, totaling 26523 revisions of evolution.\n The study found several surprising results: One, there is an increase in the number of bug fixes after API-level refactorings. Two, the time taken to fix bugs is shorter after API-level refactorings than before. Three, a large number of refactoring revisions include bug fixes at the same time or are related to later bug fix revisions. Four, API-level refactorings occur more frequently before than after major software releases. These results call for re-thinking refactoring's true benefits. Furthermore, frequent floss refactoring mistakes observed in this study call for new software engineering tools to support safe application of refactoring and behavior modifying edits together.", "title": "" }, { "docid": "c8bfa845f5eaaeeab5bcf7bdc601bfb5", "text": "Completely labeled pathology datasets are often challenging and time-consuming to obtain. Semi-supervised learning (SSL) methods are able to learn from fewer labeled data points with the help of a large number of unlabeled data points. In this paper, we investigated the possibility of using clustering analysis to identify the underlying structure of the data space for SSL. A cluster-then-label method was proposed to identify high-density regions in the data space which were then used to help a supervised SVM in finding the decision boundary. We have compared our method with other supervised and semi-supervised state-of-the-art techniques using two different classification tasks applied to breast pathology datasets. We found that compared with other state-of-the-art supervised and semi-supervised methods, our SSL method is able to improve classification performance when a limited number of labeled data instances are made available. We also showed that it is important to examine the underlying distribution of the data space before applying SSL techniques to ensure semi-supervised learning assumptions are not violated by the data.", "title": "" }, { "docid": "d3783bcc47ed84da2c54f5f536450a0c", "text": "In this paper, we present a new framework for large scale online kernel learning, making kernel methods efficient and scalable for large-scale online learning applications. 
Unlike the regular budget online kernel learning scheme that usually uses some budget maintenance strategies to bound the number of support vectors, our framework explores a completely different approach of kernel functional approximation techniques to make the subsequent online learning task efficient and scalable. Specifically, we present two different online kernel machine learning algorithms: (i) Fourier Online Gradient Descent (FOGD) algorithm that applies the random Fourier features for approximating kernel functions; and (ii) Nyström Online Gradient Descent (NOGD) algorithm that applies the Nyström method to approximate large kernel matrices. We explore these two approaches to tackle three online learning tasks: binary classification, multi-class classification, and regression. The encouraging results of our experiments on large-scale datasets validate the effectiveness and efficiency of the proposed algorithms, making them potentially more practical than the family of existing budget online kernel learning approaches.", "title": "" }, { "docid": "61f5ce7063a35192c7d736a648561e3e", "text": "BoF statistic-based local space-time features action representation is very popular for human action recognition due to its simplicity. However, the problem of large quantization error and weak semantic representation decrease traditional BoF model’s discriminant ability when applied to human action recognition in realistic scenes. To deal with the problems, we investigate the generalization ability of BoF framework for action representation as well as more effective feature encoding about high-level semantics. Towards this end, we present two-layer hierarchical codebook learning framework for human action classification in realistic scenes. In the first-layer action modelling, superpixel GMM model is developed to filter out noise features in STIP extraction resulted from cluttered background, and class-specific learning strategy is employed on the refined STIP feature space to construct compact and descriptive in-class action codebooks. In the second-layer of action representation, LDA-Km learning algorithm is proposed for feature dimensionality reduction and for acquiring more discriminative inter-class action codebook for classification. We take advantage of hierarchical framework’s representational power and the efficiency of BoF model to boost recognition performance in realistic scenes. In experiments, the performance of our proposed method is evaluated on four benchmark datasets: KTH, YouTube (UCF11), UCF Sports and Hollywood2. Experimental results show that the proposed approach achieves improved recognition accuracy than the baseline method. Comparisons with state-of-the-art works demonstrates the competitive ability both in recognition performance and time complexity.", "title": "" }, { "docid": "b0903440893a25a91c575fd96b5524fa", "text": "With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. 
The segmentation and recognition are based on a four-stage process. Firstly, during the learning time, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91 % in classification of 7 phases. Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.", "title": "" }, { "docid": "8cd666c0796c0fe764bc8de0d7a20fa3", "text": "$$\\mathcal{Q}$$ -learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for $$\\mathcal{Q}$$ -learning based on that outlined in Watkins (1989). We show that $$\\mathcal{Q}$$ -learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many $$\\mathcal{Q}$$ values can be changed each iteration, rather than just one.", "title": "" }, { "docid": "c65833b67b878e65ce617fc37c10394b", "text": "A high performance texture compression technique is introduced, which exploits the DXT5 format available on today's graphics cards. The compression technique provides a very good middle ground between DXT1 compression and no compression. Using the DXT5 format, textures consume twice the amount of memory of DXT1-compressed textures (a 4:1 compression ratio instead of 8:1). In return, however, the technique provides a significant gain in quality, and for most images, there is almost no noticeable loss in quality. In particular there is a consistent gain in RGB-PSNR of 6 dB or more for the Kodak Lossless True Color Image Suite. Furthermore, the technique allows for both real-time texture decompression during rasterization on current graphics cards, and high quality realtime compression on the CPU and GPU.", "title": "" }, { "docid": "eee0bc6ee06dce38efbc89659771f720", "text": "In a data center, an IO from an application to distributed storage traverses not only the network, but also several software stages with diverse functionality. This set of ordered stages is known as the storage or IO stack. Stages include caches, hypervisors, IO schedulers, file systems, and device drivers. Indeed, in a typical data center, the number of these stages is often larger than the number of network hops to the destination. Yet, while packet routing is fundamental to networks, no notion of IO routing exists on the storage stack. The path of an IO to an endpoint is predetermined and hard-coded. 
This forces IO with different needs (e.g., requiring different caching or replica selection) to flow through a one-size-fits-all IO stack structure, resulting in an ossified IO stack. This paper proposes sRoute, an architecture that provides a routing abstraction for the storage stack. sRoute comprises a centralized control plane and “sSwitches” on the data plane. The control plane sets the forwarding rules in each sSwitch to route IO requests at runtime based on application-specific policies. A key strength of our architecture is that it works with unmodified applications and VMs. This paper shows significant benefits of customized IO routing to data center tenants (e.g., a factor of ten for tail IO latency, more than 60% better throughput for a customized replication protocol and a factor of two in throughput for customized caching).", "title": "" }, { "docid": "98b3f17de080aed8bce62e1c00f66605", "text": "While strong progress has been made in image captioning recently, machine and human captions are still quite distinct. This is primarily due to the deficiencies in the generated word distribution, vocabulary size, and strong bias in the generators towards frequent captions. Furthermore, humans – rightfully so – generate multiple, diverse captions, due to the inherent ambiguity in the captioning task which is not explicitly considered in today's systems. To address these challenges, we change the training objective of the caption generator from reproducing ground-truth captions to generating a set of captions that is indistinguishable from human-written captions. Instead of handcrafting such a learning target, we employ adversarial training in combination with an approximate Gumbel sampler to implicitly match the generated distribution to the human one. While our method achieves comparable performance to the state-of-the-art in terms of the correctness of the captions, we generate a set of diverse captions that are significantly less biased and better match the global uni-, bi- and tri-gram distributions of the human captions.", "title": "" }, { "docid": "c945ef3a4e223a70212413b4948fcbc0", "text": "Text generation is a fundamental building block in natural language processing tasks. Existing sequential models perform autoregression directly over the text sequence and have difficulty generating long sentences of complex structures. This paper advocates a simple approach that treats sentence generation as a tree-generation task. By explicitly modelling syntactic structures in a constituent syntactic tree and performing top-down, breadth-first tree generation, our model fixes dependencies appropriately and performs implicit global planning. This is in contrast to a transition-based depth-first generation process, which has difficulty dealing with incomplete texts when parsing and also does not incorporate future contexts in planning. Our preliminary results on two generation tasks and one parsing task demonstrate that this is an effective strategy.", "title": "" }, { "docid": "fb5f6eeff54e54034970d6bcaaacb6ec", "text": "Despite superior training outcomes, adaptive optimization methods such as Adam, Adagrad or RMSprop have been found to generalize poorly compared to Stochastic gradient descent (SGD). These methods tend to perform well in the initial portion of training but are outperformed by SGD at later stages of training. We investigate a hybrid strategy that begins training with an adaptive method and switches to SGD when appropriate.
Concretely, we propose SWATS, a simple strategy which Switches from Adam to SGD when a triggering condition is satisfied. The condition we propose relates to the projection of Adam steps on the gradient subspace. By design, the monitoring process for this condition adds very little overhead and does not increase the number of hyperparameters in the optimizer. We report experiments on several standard benchmarks such as: ResNet, SENet, DenseNet and PyramidNet for the CIFAR-10 and CIFAR-100 data sets, ResNet on the tiny-ImageNet data set and language modeling with recurrent networks on the PTB and WT2 data sets. The results show that our strategy is capable of closing the generalization gap between SGD and Adam on a majority of the tasks.", "title": "" }, { "docid": "5e946f2a15b5d9c663d85cd12bc3d9fc", "text": "Individual differences in young children's understanding of others' feelings and in their ability to explain human action in terms of beliefs, and the earlier correlates of these differences, were studied with 50 children observed at home with mother and sibling at 33 months, then tested at 40 months on affective-labeling, perspective-taking, and false-belief tasks. Individual differences in social understanding were marked; a third of the children offered explanations of actions in terms of false belief, though few predicted actions on the basis of beliefs. These differences were associated with participation in family discourse about feelings and causality 7 months earlier, verbal fluency of mother and child, and cooperative interaction with the sibling. Differences in understanding feelings were also associated with the discourse measures, the quality of mother-sibling interaction, SES, and gender, with girls more successful than boys. The results support the view that discourse about the social world may in part mediate the key conceptual advances reflected in the social cognition tasks; interaction between child and sibling and the relationships between other family members are also implicated in the growth of social understanding.", "title": "" }, { "docid": "295212e614cc361b1a5fdd320d39f68b", "text": "Aiming to meet the explosive growth of mobile data traffic and reduce the network congestion, we study Time Dependent Adaptive Pricing (TDAP) with threshold policies to motivate users to shift their Internet access from peak hours to off-peak hours. With the proposed TDAP scheme, Internet Service Providers (ISPs) will be able to use less network capacity to provide users Internet access service with the same QoS. Simulation and analysis are carried out to investigate the performance of the proposed TDAP scheme based on the real Internet traffic pattern.", "title": "" }, { "docid": "ae0d8d1dec27539502cd7e3030a3fe42", "text": "The KL divergence is the most commonly used measure for comparing query and document language models in the language modeling framework for ad hoc retrieval. Since KL is rank equivalent to a specific weighted geometric mean, we examine alternative weighted means for language-model comparison, as well as alternative divergence measures. The study includes analysis of the inverse document frequency (IDF) effect of the language-model comparison methods.
Empirical evaluation, performed with different types of queries (short and verbose) and query-model induction approaches, shows that there are methods that often outperform the KL divergence in some settings.", "title": "" }, { "docid": "d18a2130df6de673362fe1c347985974", "text": "Malignant catarrhal fever (MCF) is a fatal herpesvirus infection of domestic and wild ruminants, with a short and dramatic clinical course characterized primarily by high fever, severe depression, swollen lymph nodes, salivation, diarrhea, dermatitis, neurological disorders, and ocular lesions often leading to blindness. In the present study, fatal clinical cases of sheep associated malignant catarrhal fever (SA-MCF) were identified in cattle in the state of Karnataka. These cases were initially presented with symptoms of diarrhea, respiratory distress, conjunctivitis, and nasal discharges. Laboratory diagnosis confirmed the detection of ovine herpesvirus-2 (OvHV-2) genome in the peripheral blood samples of two ailing animals. The blood samples collected subsequently from sheep of the neighboring areas also showed presence of OvHV-2 genome indicating a nidus of infection in the region. The positive test results were further confirmed by nucleotide sequencing of the OIE approved portion of tegument gene as well as complete ORF8 region of the OvHV-2 genome. Phylogenetic analysis based on the sequence of the latter region indicated close genetic relationship with other OvHV-2 reported elsewhere in the world.", "title": "" }, { "docid": "ec181b897706d101136dcbcef6e84de9", "text": "Working with large swarms of robots has challenges in calibration, sensing, tracking, and control due to the associated scalability and time requirements. Kilobots solve this through their ease of maintenance and programming, and are widely used in several research laboratories worldwide where their low cost enables large-scale swarms studies. However, the small, inexpensive nature of the Kilobots limits their range of capabilities as they are only equipped with a single sensor. In some studies, this limitation can be a source of motivation and inspiration, while in others it is an impediment. As such, we designed, implemented, and tested a novel system to communicate personalized location-and-state-based information to each robot, and receive information on each robots’ state. In this way, the Kilobots can sense additional information from a virtual environment in real time; for example, a value on a gradient, a direction toward a reference point or a pheromone trail. The augmented reality for Kilobots ( ARK) system implements this in flexible base control software which allows users to define varying virtual environments within a single experiment using integrated overhead tracking and control. We showcase the different functionalities of the system through three demos involving hundreds of Kilobots. The ARK provides Kilobots with additional and unique capabilities through an open-source tool which can be implemented with inexpensive, off-the-shelf hardware.", "title": "" } ]
scidocsrr
ca9b76b73525ec2ae6144b049ddb873e
A New Lane Line Segmentation and Detection Method based on Inverse Perspective Mapping
[ { "docid": "42cfea27f8dcda6c58d2ae0e86f2fb1a", "text": "Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle mounted laser range scanner and a ccd camera. Through experimentations, we have shown that a clustered particle filter can be used to efficiently extract lane markings.", "title": "" }, { "docid": "261f146b67fd8e13d1ad8c9f6f5a8845", "text": "Vision based automatic lane tracking system requires information such as lane markings, road curvature and leading vehicle be detected before capturing the next image frame. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image somehow distorts the actual shape of the road, which involves the width, height, and depth. Respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as an inverse perspective mapping (IPM). This paper outlines the procedures involved.", "title": "" } ]
[ { "docid": "e3ccebbfb328e525c298816950d135a5", "text": "It is important for robots to be able to decide whether they can go through a space or not, as they navigate through a dynamic environment. This capability can help them avoid injury or serious damage, e.g., as a result of running into people and obstacles, getting stuck, or falling off an edge. To this end, we propose an unsupervised and a near-unsupervised method based on Generative Adversarial Networks (GAN) to classify scenarios as traversable or not based on visual data. Our method is inspired by the recent success of data-driven approaches on computer vision problems and anomaly detection, and reduces the need for vast amounts of negative examples at training time. Collecting negative data indicating that a robot should not go through a space is typically hard and dangerous because of collisions; whereas collecting positive data can be automated and done safely based on the robot’s own traveling experience. We verify the generality and effectiveness of the proposed approach on a test dataset collected in a previously unseen environment with a mobile robot. Furthermore, we show that our method can be used to build costmaps (we call as ”GoNoGo” costmaps) for robot path planning using visual data only.", "title": "" }, { "docid": "5e5c2619ea525ef77cbdaabb6a21366f", "text": "Data profiling is an information analysis technique on data stored inside database. Data profiling purpose is to ensure data quality by detecting whether the data in the data source compiles with the established business rules. Profiling could be performed using multiple analysis techniques depending on the data element to be analyzed. The analysis process also influenced by the data profiling tool being used. This paper describes tehniques of profiling analysis using open-source tool OpenRefine. The method used in this paper is case study method, using data retrieved from BPOM Agency website for checking commodity traditional medicine permits. Data attributes that became the main concern of this paper is Nomor Ijin Edar (NIE / distribution permit number) and registrar company name. The result of this research were suggestions to improve data quality on NIE and company name, which consists of data cleansing and improvement to business process and applications.", "title": "" }, { "docid": "d9d68377bb73d7abca39455b49abe8b7", "text": "A boosting-based method of learning a feed-forward artificial neural network (ANN) with a single layer of hidden neurons and a single output neuron is presented. Initially, an algorithm called Boostron is described that learns a single-layer perceptron using AdaBoost and decision stumps. It is then extended to learn weights of a neural network with a single hidden layer of linear neurons. Finally, a novel method is introduced to incorporate non-linear activation functions in artificial neural network learning. The proposed method uses series representation to approximate non-linearity of activation functions, learns the coefficients of nonlinear terms by AdaBoost. It adapts the network parameters by a layer-wise iterative traversal of neurons and an appropriate reduction of the problem. A detailed performances comparison of various neural network models learned the proposed methods and those learned using the Least Mean Squared learning (LMS) and the resilient back-propagation (RPROP) is provided in this paper. 
Several favorable results are reported for 17 synthetic and real-world datasets with different degrees of difficulties for both binary and multi-class problems.", "title": "" }, { "docid": "9da1449675af42a2fc75ba8259d22525", "text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet “brands” such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term business-consumer relationships (Fournier, 1998). Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as “perceptions about a brand as reflected by the brand associations held in consumer memory”. These associations include perceptions of brand quality and attitudes toward the brand. Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are
multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations (multi-dimensional) as they propose, or if they are simply indicators of brand associations (unidimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; Roedder John et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multidimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: test a protocol for developing category-specific measures of brand image; examine the conceptualization of brand associations as a multidimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background Brand associations According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything “linked” in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand. Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Scales to measure partially brand associations have been developed.
For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length. Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, “using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value”. Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). Brand image, brand attitude, and perceived quality Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan, 1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs. A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand – whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing literature.
Bruner and Hensel (1996) reported 66 published studies which measured brand attitud", "title": "" }, { "docid": "a8670bebe828e07111f962d72c5909aa", "text": "Personalities are general properties of humans and other animals. Different personality traits are phenotypically correlated, and heritabilities of personality traits have been reported in humans and various animals. In great tits, consistent heritable differences have been found in relation to exploration, which is correlated with various other personality traits. In this paper, we investigate whether or not risk-taking behaviour is part of these avian personalities. We found that (i) risk-taking behaviour is repeatable and correlated with exploratory behaviour in wild-caught hand-reared birds, (ii) in a bi-directional selection experiment on 'fast' and 'slow' early exploratory behaviour, bird lines tend to differ in risk-taking behaviour, and (iii) within-nest variation of risk-taking behaviour is smaller than between-nest variation. To show that risk-taking behaviour has a genetic component in a natural bird population, we bred great tits in the laboratory and artificially selected 'high' and 'low' risk-taking behaviour for two generations. Here, we report a realized heritability of 19.3 +/- 3.3% (s.e.m.) for risk-taking behaviour. With these results we show in several ways that risk-taking behaviour is linked to exploratory behaviour, and we therefore have evidence for the existence of avian personalities. Moreover, we prove that there is heritable variation in more than one correlated personality trait in a natural population, which demonstrates the potential for correlated evolution.", "title": "" }, { "docid": "9aa21d2b6ea52e3e1bdd3e2795d1bf03", "text": "Dining cryptographers networks (or DC-nets) are a privacypreserving primitive devised by Chaum for anonymous message publication. A very attractive feature of the basic DC-net is its non-interactivity. Subsequent to key establishment, players may publish their messages in a single broadcast round, with no player-to-player communication. This feature is not possible in other privacy-preserving tools like mixnets. A drawback to DC-nets, however, is that malicious players can easily jam them, i.e., corrupt or block the transmission of messages from honest parties, and may do so without being traced. Several researchers have proposed valuable methods of detecting cheating players in DC-nets. This is usually at the cost, however, of multiple broadcast rounds, even in the optimistic case, and often of high computational and/or communications overhead, particularly for fault recovery. We present new DC-net constructions that simultaneously achieve noninteractivity and high-probability detection and identification of cheating players. Our proposals are quite efficient, imposing a basic cost that is linear in the number of participating players. Moreover, even in the case of cheating in our proposed system, just one additional broadcast round suffices for full fault recovery. Among other tools, our constructions employ bilinear maps, a recently popular cryptographic technique for reducing communication complexity.", "title": "" }, { "docid": "be8efe56e56bccf1668faa7b9c0a6e57", "text": "Deep convolutional neural networks (CNNs) have been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition. However, the process of learning and prediction is little understood, particularly when it is applied to spectrograms. 
We introduce auralisation of a CNN to understand its underlying mechanism, which is based on a deconvolution procedure introduced in [2]. Auralisation of a CNN is converting the learned convolutional features that are obtained from deconvolution into audio signals. In the experiments and discussions, we explain the trained features of a 5-layer CNN based on the deconvolved spectrograms and auralised signals. The pairwise correlations per layer with different musical attributes are also investigated to understand the evolution of the learnt features. It is shown that in the deep layers, the features are learnt to capture textures, the patterns of continuous distributions, rather than shapes of lines.", "title": "" }, { "docid": "133a48a5c6c568d33734bd95d4aec0b2", "text": "The topic information of conversational content is important for sustaining communication, so topic detection and tracking is an important research problem. Because topic shifts occur frequently in long conversations and a single conversation may contain many topics, it is important to detect the different topics present in conversational content. This paper detects topic information by using agglomerative clustering of utterances together with a Dynamic Latent Dirichlet Allocation topic model: the proportions of verbs and nouns are used to measure the similarity between utterances, and all utterances in the conversational content are grouped by an agglomerative clustering algorithm. Because the topic structure of conversational content is fragile, speech act information is used, and hypernym information obtained from E-HowNet provides robustness over word categories. A Latent Dirichlet Allocation topic model is normally used to detect topics at the document level and can detect only one topic when applied directly to conversational content, which frequently contains many topics; therefore, speech act information and hypernym information are also used to train the Latent Dirichlet Allocation models, and the trained models are then used to detect the different topic information in conversational content. For evaluating the proposed method, a support vector machine is developed for comparison. According to the experimental results, the proposed method outperforms the approach based on the support vector machine in topic detection and tracking in spoken dialogue.", "title": "" }, { "docid": "09985252933e82cf1615dabcf1e6d9a2", "text": "Facial landmark detection plays a very important role in many facial analysis applications such as identity recognition, facial expression analysis, facial animation, 3D face reconstruction as well as facial beautification. With the recent advance of deep learning, the performance of facial landmark detection, including on unconstrained in-the-wild datasets, has seen considerable improvement. This paper presents a survey of deep facial landmark detection for 2D images and video. A comparative analysis of different face alignment approaches is provided as well as some future research directions.", "title": "" }, { "docid": "f55ac9e319ad8b9782a34251007a5d06", "text": "The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines.
In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.", "title": "" }, { "docid": "b3fc899c49ceb699f62b43bb0808a1b2", "text": "Social network users publicly share a wide variety of information with their followers and the general public ranging from their opinions, sentiments and personal life activities. There has already been significant advance in analyzing the shared information from both micro (individual user) and macro (community level) perspectives, giving access to actionable insight about user and community behaviors. The identification of personal life events from user’s profiles is a challenging yet important task, which if done appropriately, would facilitate more accurate identification of users’ preferences, interests and attitudes. For instance, a user who has just broken his phone, is likely to be upset and also be looking to purchase a new phone. While there is work that identifies tweets that include mentions of personal life events, our work in this paper goes beyond the state of the art by predicting a future personal life event that a user will be posting about on Twitter solely based on the past tweets. We propose two architectures based on recurrent neural networks, namely the classification and generation architectures, that determine the future personal life event of a user. We evaluate our work based on a gold standard Twitter life event dataset and compare our work with the state of the art baseline technique for life event detection. While presenting performance measures, we also discuss the limitations of our work in this paper.", "title": "" }, { "docid": "c0b000176bba658ef702872f0174b602", "text": "Distributed Denial of Service (DDoS) attacks represent a major threat to uninterrupted and efficient Internet service. In this paper, we empirically evaluate several major information metrics, namely, Hartley entropy, Shannon entropy, Renyi’s entropy, generalized entropy, Kullback–Leibler divergence and generalized information distance measure in their ability to detect both low-rate and high-rate DDoS attacks. These metrics can be used to describe characteristics of network traffic data and an appropriate metric facilitates building an effective model to detect both low-rate and high-rate DDoS attacks. We use MIT Lincoln Laboratory, CAIDA and TUIDS DDoS datasets to illustrate the efficiency and effectiveness of each metric for DDoS detection. © 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "a673945eaa9b5a350f7d7421c45ac238", "text": "The intention of this study was to identify the bacterial pathogens infecting Oreochromis niloticus (Nile tilapia) and Clarias gariepinus (African catfish), and to establish the antibiotic susceptibility of fish bacteria in Uganda. A total of 288 fish samples from 40 fish farms (ponds, cages, and tanks) and 8 wild water sites were aseptically collected and bacteria isolated from the head kidney, liver, brain and spleen. 
The isolates were identified by their morphological characteristics, conventional biochemical tests and Analytical Profile Index test kits. Antibiotic susceptibility of selected bacteria was determined by the Kirby-Bauer disc diffusion method. The following well-known fish pathogens were identified at a farm prevalence of; Aeromonas hydrophila (43.8%), Aeromonas sobria (20.8%), Edwardsiella tarda (8.3%), Flavobacterium spp. (4.2%) and Streptococcus spp. (6.3%). Other bacteria with varying significance as fish pathogens were also identified including Plesiomonas shigelloides (25.0%), Chryseobacterium indoligenes (12.5%), Pseudomonas fluorescens (10.4%), Pseudomonas aeruginosa (4.2%), Pseudomonas stutzeri (2.1%), Vibrio cholerae (10.4%), Proteus spp. (6.3%), Citrobacter spp. (4.2%), Klebsiella spp. (4.2%) Serratia marcescens (4.2%), Burkholderia cepacia (2.1%), Comamonas testosteroni (8.3%) and Ralstonia picketti (2.1%). Aeromonas spp., Edwardsiella tarda and Streptococcus spp. were commonly isolated from diseased fish. Aeromonas spp. (n = 82) and Plesiomonas shigelloides (n = 73) were evaluated for antibiotic susceptibility. All isolates tested were susceptible to at-least ten (10) of the fourteen antibiotics evaluated. High levels of resistance were however expressed by all isolates to penicillin, oxacillin and ampicillin. This observed resistance is most probably intrinsic to those bacteria, suggesting minimal levels of acquired antibiotic resistance in fish bacteria from the study area. To our knowledge, this is the first study to establish the occurrence of several bacteria species infecting fish; and to determine antibiotic susceptibility of fish bacteria in Uganda. The current study provides baseline information for future reference and fish disease management in the country.", "title": "" }, { "docid": "b3923d263c230f527f06b85275522f60", "text": "Cloud computing is a relatively new concept that offers the potential to deliver scalable elastic services to many. The notion of pay-per use is attractive and in the current global recession hit economy it offers an economic solution to an organizations’ IT needs. Computer forensics is a relatively new discipline born out of the increasing use of computing and digital storage devices in criminal acts (both traditional and hi-tech). Computer forensic practices have been around for several decades and early applications of their use can be charted back to law enforcement and military investigations some 30 years ago. In the last decade computer forensics has developed in terms of procedures, practices and tool support to serve the law enforcement community. However, it now faces possibly its greatest challenges in dealing with cloud computing. Through this paper we explore these challenges and suggest some possible solutions.", "title": "" }, { "docid": "169ed8d452a7d0dd9ecf90b9d0e4a828", "text": "Technology is common in the domain of knowledge distribution, but it rarely enhances the process of knowledge use. Distribution delivers knowledge to the potential user's desktop but cannot dictate what he or she does with it thereafter. It would be interesting to envision technologies that help to manage personal knowledge as it applies to decisions and actions. The viewpoints about knowledge vary from individual, community, society, personnel development or national development. Personal Knowledge Management (PKM) integrates Personal Information Management (PIM), focused on individual skills, with Knowledge Management (KM). 
KM software is a subset of enterprise content management software and covers a range of software that specialises in the way information is collected, stored and/or accessed. This article focuses on KM skills, PKM and PIM open source software, and Social Personal Management, and also highlights a comparison of knowledge base management software and its use.", "title": "" }, { "docid": "7095bf529a060dd0cd7eeb2910998cf8", "text": "The proliferation of the internet, along with the attractiveness of the web in recent years, has made web mining a research area of great magnitude. Web mining essentially has many advantages which make this technology attractive to researchers. The analysis of web users’ navigational patterns within a web site can provide useful information for applications like server performance enhancement, restructuring a web site, direct marketing in e-commerce, etc. The navigation paths may be explored based on some similarity criteria in order to draw useful inferences about web usage. The objective of this paper is to propose an effective clustering technique to group users’ sessions by modifying the K-means algorithm, and to suggest a method to compute the distance between sessions based on the similarity of their web access paths, which takes care of the issue of user sessions that are of variable", "title": "" }, { "docid": "409d104fa3e992ac72c65b004beaa963", "text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.", "title": "" }, { "docid": "d6bbec8d1426cacba7f8388231f04add", "text": "This paper presents a novel multiple-frequency resonant inverter for induction heating (IH) applications. By adopting a center tap transformer, the proposed resonant inverter can give a load switching frequency twice the isolated-gate bipolar transistor (IGBT) switching frequency. The structure and the operation of the proposed topology are described in order to demonstrate how the output frequency of the proposed resonant inverter is twice the switching frequency of the IGBTs. In addition to this, the IGBTs in the proposed topology work in zero-voltage switching during the turn-on phase of the switches. The new topology is verified by the experimental results using a prototype for IH applications. Moreover, the increased efficiency of the proposed inverter is verified by comparison with conventional designs.", "title": "" }, { "docid": "6ec0b302a485b787b3d21b89f79a0110", "text": "This paper draws on primary and secondary data to propose a taxonomy of strategies, or \"schools,\" for knowledge management. The primary purpose of this framework is to guide executives on choices to initiate knowledge management projects according to goals, organizational character, and technological, behavioral, or economic biases.
It may also be useful to teachers in demonstrating the scope of knowledge management and to researchers in generating propositions for further study.", "title": "" }, { "docid": "945bf7690169b5f2e615324fb133bc19", "text": "Exponential growth in the number of scientific publications yields the need for effective automatic analysis of rhetorical aspects of scientific writing. Acknowledging the argumentative nature of scientific text, in this work we investigate the link between the argumentative structure of scientific publications and rhetorical aspects such as discourse categories or citation contexts. To this end, we (1) augment a corpus of scientific publications annotated with four layers of rhetoric annotations with argumentation annotations and (2) investigate neural multi-task learning architectures combining argument extraction with a set of rhetorical classification tasks. By coupling rhetorical classifiers with the extraction of argumentative components in a joint multi-task learning setting, we obtain significant performance gains for different rhetorical analysis tasks.", "title": "" } ]
scidocsrr
657a221698b7b78cc4ded97765ac72ad
FPGA-Based Test-Bench for Resonant Inverter Load Characterization
[ { "docid": "87f0810dde0447cea2cff24149b49e0a", "text": "The design of new power-converter solutions optimized for specific applications requires, at a certain step, the design and implementation of several prototypes in order to verify the converter operation. This is a time-consuming task which also involves a significant economical cost. The aim of this paper is to present a versatile power electronics architecture which provides a tool to make the implementation and evaluation of new power converters straightforward. The adopted platform includes a versatile control architecture and a modular power electronics hardware solution. The control architecture is a field-programmable-gate-array-based system-on-programmable-chip solution which combines the advantages of the processor-firmware versatility and the effectiveness of ad hoc paralleled digital hardware. Moreover, the modular power electronics hardware provides a fast method to reconfigure the power-converter topology. The architecture proposed in this paper has been applied to the development of power converters for domestic induction heating, although it can be extended to other applications with similar requirements. A complete development test bench has been carried out, and some experimental results are shown in order to verify the proper system operation.", "title": "" }, { "docid": "9123ff1c2e6c52bf9a16a6ed4c67f151", "text": "Domestic induction cookers operation is based on a resonant inverter which supplies medium-frequency currents (20-100 kHz) to an inductor, which heats up the pan. The variable load that is inherent to this application requires the use of a reliable and load-adaptive control algorithm. In addition, a wide output power range is required to get a satisfactory user performance. In this paper, a control algorithm to cover the variety of loads and the output power range is proposed. The main design criteria are efficiency, power balance, acoustic noise, flicker emissions, and user performance. As a result of the analysis, frequency limit and power level limit algorithms are proposed based on square wave and pulse density modulations. These have been implemented in a field-programmable gate array, including output power feedback and mains-voltage zero-cross-detection circuitry. An experimental verification has been performed using a commercial induction heating inverter. This provides a convenient experimental test bench to analyze the viability of the proposed algorithm.", "title": "" } ]
[ { "docid": "99e604a84b6d56d2f42efe7b0a2ddec8", "text": "This work aims at providing a RLCG modeling ofthe 10 µm fine-pitch microbump type interconnects in the 100 MHz-40 GHz frequency band based on characterization approach. RF measurements are performed on two-port test structures within a short-loop with chip to wafer assembly using the fine pitch 10 µm Cu-pillar on a 10 Ohm.cm substrate resistivity silicon interposer. Accuracy is obtained thanks to a coplanar transmission line using 44 Cu-pillar transitions. To the author knowledge, it is the first time that RLCG modeling of fine-pitch Cu-pillar is extracted from experimental results. Another goal of this work is to get a better understanding of the main physical effects over a wide frequency range, especially concerning the key parameter of fine pitch Cu-pillar, i.e. the resistance. Finally, analysis based on the proposed RLCG modeling are performed to optimize over frequency the resistive interposer-to-chip link thanks to process modifications mitigating high frequency parasitic effects.", "title": "" }, { "docid": "d4cf614c352b3bbef18d7f219a3da2d1", "text": "In recent years there has been growing interest on the occurrence and the fate of pharmaceuticals in the aquatic environment. Nevertheless, few data are available covering the fate of the pharmaceuticals in the water/sediment compartment. In this study, the environmental fate of 10 selected pharmaceuticals and pharmaceutical metabolites was investigated in water/sediment systems including both the analysis of water and sediment. The experiments covered the application of four 14C-labeled pharmaceuticals (diazepam, ibuprofen, iopromide, and paracetamol) for which radio-TLC analysis was used as well as six nonlabeled compounds (carbamazepine, clofibric acid, 10,11-dihydro-10,11-dihydroxycarbamazepine, 2-hydroxyibuprofen, ivermectin, and oxazepam), which were analyzed via LC-tandem MS. Ibuprofen, 2-hydroxyibuprofen, and paracetamol displayed a low persistence with DT50 values in the water/sediment system < or =20 d. The sediment played a key role in the elimination of paracetamol due to the rapid and extensive formation of bound residues. A moderate persistence was found for ivermectin and oxazepam with DT50 values of 15 and 54 d, respectively. Lopromide, for which no corresponding DT50 values could be calculated, also exhibited a moderate persistence and was transformed into at least four transformation products. For diazepam, carbamazepine, 10,11-dihydro-10,11-dihydroxycarbamazepine, and clofibric acid, system DT90 values of >365 d were found, which exhibit their high persistence in the water/sediment system. An elevated level of sorption onto the sediment was observed for ivermectin, diazepam, oxazepam, and carbamazepine. Respective Koc values calculated from the experimental data ranged from 1172 L x kg(-1) for ivermectin down to 83 L x kg(-1) for carbamazepine.", "title": "" }, { "docid": "f69b9816e8f8716d12eaa43e3d1222f4", "text": "BACKGROUND\nIn 1986, the European Organization for Research and Treatment of Cancer (EORTC) initiated a research program to develop an integrated, modular approach for evaluating the quality of life of patients participating in international clinical trials.\n\n\nPURPOSE\nWe report here the results of an international field study of the practicality, reliability, and validity of the EORTC QLQ-C30, the current core questionnaire. 
The QLQ-C30 incorporates nine multi-item scales: five functional scales (physical, role, cognitive, emotional, and social); three symptom scales (fatigue, pain, and nausea and vomiting); and a global health and quality-of-life scale. Several single-item symptom measures are also included.\n\n\nMETHODS\nThe questionnaire was administered before treatment and once during treatment to 305 patients with nonresectable lung cancer from centers in 13 countries. Clinical variables assessed included disease stage, weight loss, performance status, and treatment toxicity.\n\n\nRESULTS\nThe average time required to complete the questionnaire was approximately 11 minutes, and most patients required no assistance. The data supported the hypothesized scale structure of the questionnaire with the exception of role functioning (work and household activities), which was also the only multi-item scale that failed to meet the minimal standards for reliability (Cronbach's alpha coefficient > or = .70) either before or during treatment. Validity was shown by three findings. First, while all interscale correlations were statistically significant, the correlation was moderate, indicating that the scales were assessing distinct components of the quality-of-life construct. Second, most of the functional and symptom measures discriminated clearly between patients differing in clinical status as defined by the Eastern Cooperative Oncology Group performance status scale, weight loss, and treatment toxicity. Third, there were statistically significant changes, in the expected direction, in physical and role functioning, global quality of life, fatigue, and nausea and vomiting, for patients whose performance status had improved or worsened during treatment. The reliability and validity of the questionnaire were highly consistent across the three language-cultural groups studied: patients from English-speaking countries, Northern Europe, and Southern Europe.\n\n\nCONCLUSIONS\nThese results support the EORTC QLQ-C30 as a reliable and valid measure of the quality of life of cancer patients in multicultural clinical research settings. Work is ongoing to examine the performance of the questionnaire among more heterogenous patient samples and in phase II and phase III clinical trials.", "title": "" }, { "docid": "adf0a2cad66a7e48c16f02ef1bc4e9da", "text": "Recently, several techniques have been explored to detect unusual behaviour in surveillance videos. Nevertheless, few studies leverage features from pre-trained CNNs and none of them present a comparison of features generated by different models. Motivated by this gap, we compare features extracted by four state-of-the-art image classification networks as a way of describing patches from security video frames. We carry out experiments on the Ped1 and Ped2 datasets and analyze the usage of different feature normalization techniques. Our results indicate that choosing the appropriate normalization is crucial to improve the anomaly detection performance when working with CNN features. Also, in the Ped2 dataset our approach was able to obtain results comparable to the ones of several state-of-the-art methods.
Lastly, as our method only considers the appearance of each frame, we believe that it can be combined with approaches that focus on motion patterns to further improve performance.", "title": "" }, { "docid": "1cbd13de915d2a4cedd736345ebb2134", "text": "This paper deals with the design and implementation of a nonlinear control algorithm for the attitude tracking of a four-rotor helicopter known as quadrotor. This algorithm is based on the second order sliding mode technique known as Super-Twisting Algorithm (STA) which is able to ensure robustness with respect to bounded external disturbances. In order to show the effectiveness of the proposed controller, experimental tests were carried out on a real quadrotor. The obtained results show the good performance of the proposed controller in terms of stabilization, tracking and robustness with respect to external disturbances.", "title": "" }, { "docid": "a6d26826ee93b3b5dec8282d0c632f8e", "text": "Superficial Acral Fibromyxoma is a rare tumor of soft tissues. It is a relatively new entity described in 2001 by Fetsch et al. It probably represents a fibrohistiocytic tumor with less than 170 described cases. We bring a new case of SAF on the 5th toe of the right foot, in a 43-year-old woman. After surgical excision with safety margins which included the nail apparatus, it has not recurred (22 months of follow up). We carried out a review of the location of all SAF published up to the present day.", "title": "" }, { "docid": "f3278416976069448fd7e6d0ea797dc6", "text": "Data Type (ADT), 45 abstraction mechanisms, 134 active sever pages, 252 affine transformation, 227 aggregation, 83, 125 anaglyphic stereo, 223 Apache HTTP, 250 ArcView 3D Analyst, 18 association, 84 ATKIS, 73 AutoCad, 2 AVS, 149 backward pass, 173 Bentley, 252 boolean, 199 Borgefors DT, 153 Boundary Representation (BR), 55 Boundary representation (B-rep), 17 CAD, 1, 4, 224 cartesian coordinate, 41 cell Complex, 66 CGI, 247 chamfer 3-4, 154 chamfer 3-4-5, 172 chamfer 5-7-11, 154 Classification, 82, 118, 135 Client-Server, 232 COBRA, 249 computer graphics, 224 conceptual data model, 45 conceptual design, 48 constrained triangulation, 94, 210 Constructive Solid Geometry (CSG), 13, 17, 55 Contouring, 190 contouring algorithm, 190 Cortona, 251 CSG, 34 DB2, 236 dBASE, 109 DBMS, 46, 228 Delaunay triangulation, 164 DEMViewer, 246 dependency diagram, 111 depth sorting algorithm, 224", "title": "" }, { "docid": "1f2f6aab0e3c813392ecab46cdc171b5", "text": "Theory of mind (ToM) refers to the ability to represent one's own and others' cognitive and affective mental states. Recent imaging studies have aimed to disentangle the neural networks involved in cognitive as opposed to affective ToM, based on clinical observations that the two can functionally dissociate. Due to large differences in stimulus material and task complexity findings are, however, inconclusive. Here, we investigated the neural correlates of cognitive and affective ToM in psychologically healthy male participants (n = 39) using functional brain imaging, whereby the same set of stimuli was presented for all conditions (affective, cognitive and control), but associated with different questions prompting either a cognitive or affective ToM inference. Direct contrasts of cognitive versus affective ToM showed that cognitive ToM recruited the precuneus and cuneus, as well as regions in the temporal lobes bilaterally. 
Affective ToM, in contrast, involved a neural network comprising prefrontal cortical structures, as well as smaller regions in the posterior cingulate cortex and the basal ganglia. Notably, these results were complemented by a multivariate pattern analysis (leave one study subject out), yielding a classifier with an accuracy rate of more than 85% in distinguishing between the two ToM-conditions. The regions contributing most to successful classification corresponded to those found in the univariate analyses. The study contributes to the differentiation of neural patterns involved in the representation of cognitive and affective mental states of others.", "title": "" }, { "docid": "b4f47ddd8529fe3859869b9e7c85bb2f", "text": "This paper studies the problem of building text classifiers using positive and unlabeled examples. The key feature of this problem is that there is no negative example for learning. Recently, a few techniques for solving this problem were proposed in the literature. These techniques are based on the same idea, which builds a classifier in two steps. Each existing technique uses a different method for each step. In this paper, we first introduce some new methods for the two steps, and perform a comprehensive evaluation of all possible combinations of methods of the two steps. We then propose a more principled approach to solving the problem based on a biased formulation of SVM, and show experimentally that it is more accurate than the existing techniques.", "title": "" }, { "docid": "74fb6f153fe8d6f8eac0f18c1040a659", "text": "The DAVID Gene Functional Classification Tool http://david.abcc.ncifcrf.gov uses a novel agglomeration algorithm to condense a list of genes or associated biological terms into organized classes of related genes or biology, called biological modules. This organization is accomplished by mining the complex biological co-occurrences found in multiple sources of functional annotation. It is a powerful method to group functionally related genes and terms into a manageable number of biological modules for efficient interpretation of gene lists in a network context.", "title": "" }, { "docid": "b4ae619b0b9cc966622feb2dceda0f2e", "text": "A novel pressure sensing circuit for non-invasive RF/microwave blood glucose sensors is presented in this paper. RF sensors are of interest to researchers for measuring blood glucose levels non-invasively. For the measurements, the finger is a popular site that has a good amount of blood supply. When a finger is placed on top of the RF sensor, the electromagnetic fields radiating from the sensor interact with the blood in the finger and the resulting sensor response depends on the permittivity of the blood. The varying glucose level in the blood results in a permittivity change causing a shift in the sensor's response. Therefore, by observing the sensor's frequency response it may be possible to predict the blood glucose level. However, there are two crucial points in taking and subsequently predicting the blood glucose level. These points are; the position of the finger on the sensor and the pressure applied onto the sensor. A variation in the glucose level causes a very small frequency shift. However, finger positioning and applying inconsistent pressure have more pronounced effect on the sensor response. For this reason, it may not be possible to take a correct reading if these effects are not considered carefully. 
Two novel pressure sensing circuits are proposed and presented in this paper to accurately monitor the pressure applied.", "title": "" }, { "docid": "52318d0743e2a6ec215076efde8cd21c", "text": "We survey the recent wave of extensions to the popular map-reduce systems, including those that have begun to address the implementation of recursive queries using the same computing environment as map-reduce. A central problem is that recursive tasks cannot deliver their output only at the end, which makes recovery from failures much more complicated than in map-reduce and its nonrecursive extensions. We propose several algorithmic ideas for efficient implementation of recursions in the map-reduce environment and discuss several alternatives for supporting recovery from failures without restarting the entire job.", "title": "" }, { "docid": "4520cafacd4794ec942030252652ae7c", "text": "While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it’s critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. OPEN ACCESS Sensors 2014, 14 18852", "title": "" }, { "docid": "323e37bdf09bb65d232eb7e78360e77a", "text": "Breast cancer is a heterogeneous disease that can be subdivided into clinical, histopathological and molecular subtypes (luminal A-like, luminal B-like/HER2-negative, luminal B-like/HER2-positive, HER2-positive, and triple-negative). The study of new molecular factors is essential to obtain further insights into the mechanisms involved in the tumorigenesis of each tumor subtype. RASSF2 is a gene that is hypermethylated in breast cancer and whose clinical value has not been previously studied. The hypermethylation of RASSF1 and RASSF2 genes was analyzed in 198 breast tumors of different subtypes. The effect of the demethylating agent 5-aza-2'-deoxycytidine in the re-expression of these genes was examined in triple-negative (BT-549), HER2 (SK-BR-3), and luminal cells (T-47D). Different patterns of RASSF2 expression for distinct tumor subtypes were detected by immunohistochemistry. RASSF2 hypermethylation was much more frequent in luminal subtypes than in non-luminal tumors (p = 0.001). 
The re-expression of this gene by lentiviral transduction contributed to the differential cell proliferation and response to antineoplastic drugs observed in luminal compared with triple-negative cell lines. RASSF2 hypermethylation is associated with better prognosis in multivariate statistical analysis (P = 0.039). In conclusion, RASSF2 gene is differently methylated in luminal and non-luminal tumors and is a promising suppressor gene with clinical involvement in breast cancer.", "title": "" }, { "docid": "38808b99d3aa8f08ea9164ee30ed53ca", "text": "This paper presents two novel microstrip-to-slotline baluns. Their design is based on the slotted microstrip cross-junction and its multi-mode equivalent circuit model, i.e., each slotted microstrip supports two modes that have even and odd symmetry. The first balun is a modified version of the conventional 90° via-less microstrip to slotline one with different microstrip and slotline impedances. The 3 dB bandwidth is 2.44 GHz and the minimum insertion loss is 0.5 dB at 2.4 GHz. The second balun is a via-less straight microstrip-to-slotline one that has 3 dB bandwidth of 2.29 GHz and minimum insertion loss of 0.46 dB at 2.4 GHz. Theoretical predictions have been confirmed by EM simulations and measurements.", "title": "" }, { "docid": "54ad1c4a7a6fcb858ad18029fdbbef24", "text": "We can often detect from a person’s utterances whether he/she is in favor of or against a given target entity (a product, topic, another person, etc.). Here for the first time we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets of interest—their stance. The targets of interest may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. The data pertains to six targets of interest commonly known and debated in the United States. Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet. The annotations were performed by crowdsourcing. Several techniques were employed to encourage high-quality annotations (for example providing clear and simple instructions) and to identify and discard poor annotations (for example, using a small set of check questions annotated by the authors). This Stance Dataset, which was subsequently also annotated for sentiment, can be used to better understand the relationship between stance, sentiment, entity relationships, and textual inference.", "title": "" }, { "docid": "a085131dda55d95a52fa0d4653f77410", "text": "Numerous studies show that happy individuals are successful across multiple life domains, including marriage, friendship, income, work performance, and health. The authors suggest a conceptual model to account for these findings, arguing that the happiness-success link exists not only because success makes people happy, but also because positive affect engenders success. Three classes of evidence--crosssectional, longitudinal, and experimental--are documented to test their model. Relevant studies are described and their effect sizes combined meta-analytically. The results reveal that happiness is associated with and precedes numerous successful outcomes, as well as behaviors paralleling success. Furthermore, the evidence suggests that positive affect--the hallmark of well-being--may be the cause of many of the desirable characteristics, resources, and successes correlated with happiness. 
Limitations, empirical issues, and important future research questions are discussed.", "title": "" }, { "docid": "ae1e110d99dee36a37be3e89b4839bd0", "text": "We describe two techniques for rendering isosurfaces in multiresolution volume data such that the uncertainty (error) in the data is shown in the resulting visualization. In general the visualization of uncertainty in data is difficult, but the nature of isosurface rendering makes it amenable to an effective solution. In addition to showing the error in the data used to generate the isosurface, we also show the value of an additional data variate on the isosurface. The results combine multiresolution and uncertainty visualization techniques into a hybrid approach. Our technique is applied to multiresolution examples from the medical domain.", "title": "" } ]
scidocsrr
a9a67851f9645921c3323aafcd5942e1
Enhanced Security for Cloud Storage using File Encryption
[ { "docid": "88bf67ec7ff0cfa3f1dc6af12140d33b", "text": "Cloud computing is set of resources and services offered through the Internet. Cloud services are delivered from data centers located throughout the world. Cloud computing facilitates its consumers by providing virtual resources via internet. General example of cloud services is Google apps, provided by Google and Microsoft SharePoint. The rapid growth in field of “cloud computing” also increases severe security concerns. Security has remained a constant issue for Open Systems and internet, when we are talking about security cloud really suffers. Lack of security is the only hurdle in wide adoption of cloud computing. Cloud computing is surrounded by many security issues like securing data, and examining the utilization of cloud by the cloud computing vendors. The wide acceptance www has raised security risks along with the uncountable benefits, so is the case with cloud computing. The boom in cloud computing has brought lots of security challenges for the consumers and service providers. How the end users of cloud computing know that their information is not having any availability and security issues? Every one poses, Is their information secure? This study aims to identify the most vulnerable security threats in cloud computing, which will enable both end users and vendors to know about the key security threats associated with cloud computing. Our work will enable researchers and security professionals to know about users and vendors concerns and critical analysis about the different security models and tools proposed.", "title": "" } ]
[ { "docid": "d81b67d0a4129ac2e118c9babb59299e", "text": "Motivation\nA large number of newly sequenced proteins are generated by the next-generation sequencing technologies and the biochemical function assignment of the proteins is an important task. However, biological experiments are too expensive to characterize such a large number of protein sequences, thus protein function prediction is primarily done by computational modeling methods, such as profile Hidden Markov Model (pHMM) and k-mer based methods. Nevertheless, existing methods have some limitations; k-mer based methods are not accurate enough to assign protein functions and pHMM is not fast enough to handle large number of protein sequences from numerous genome projects. Therefore, a more accurate and faster protein function prediction method is needed.\n\n\nResults\nIn this paper, we introduce DeepFam, an alignment-free method that can extract functional information directly from sequences without the need of multiple sequence alignments. In extensive experiments using the Clusters of Orthologous Groups (COGs) and G protein-coupled receptor (GPCR) dataset, DeepFam achieved better performance in terms of accuracy and runtime for predicting functions of proteins compared to the state-of-the-art methods, both alignment-free and alignment-based methods. Additionally, we showed that DeepFam has a power of capturing conserved regions to model protein families. In fact, DeepFam was able to detect conserved regions documented in the Prosite database while predicting functions of proteins. Our deep learning method will be useful in characterizing functions of the ever increasing protein sequences.\n\n\nAvailability and implementation\nCodes are available at https://bhi-kimlab.github.io/DeepFam.", "title": "" }, { "docid": "6ebf60b36d9a13c5ae6ded91ee7d95fe", "text": "In this paper, a novel approach for Kannada, Telugu and Devanagari handwritten numerals recognition based on global and local structural features is proposed. Probabilistic Neural Network (PNN) Classifier is used to classify the Kannada, Telugu and Devanagari numerals separately. Algorithm is validated with Kannada, Telugu and Devanagari numerals dataset by setting various radial values of PNN classifier under different experimental setup. The experimental results obtained are encouraging and comparable with other methods found in literature survey. The novelty of the proposed method is free from thinning and size", "title": "" }, { "docid": "6f2162f883fce56eaa6bd8d0fbcedc0b", "text": "While data from Massive Open Online Courses (MOOCs) offers the potential to gain new insights into the ways in which online communities can contribute to student learning, much of the richness of the data trace is still yet to be mined. In particular, very little work has attempted fine-grained content analyses of the student interactions in MOOCs. Survey research indicates the importance of student goals and intentions in keeping them involved in a MOOC over time. Automated fine-grained content analyses offer the potential to detect and monitor evidence of student engagement and how it relates to other aspects of their behavior. Ultimately these indicators reflect their commitment to remaining in the course. As a methodological contribution, in this paper we investigate using computational linguistic models to measure learner motivation and cognitive engagement from the text of forum posts. 
We validate our techniques using survival models that evaluate the predictive validity of these variables in connection with attrition over time. We conduct this evaluation in three MOOCs focusing on very different types of learning materials. Prior work demonstrates that participation in the discussion forums at all is a strong indicator of student commitment. Our methodology allows us to differentiate better among these students, and to identify danger signs that a struggling student is in need of support within a population whose interaction with the course offers the opportunity for effective support to be administered. Theoretical and practical implications will be discussed.", "title": "" }, { "docid": "33c497748082b3c62fc1b5e8d5ab9d05", "text": "The prevention and treatment of malaria is heavily dependent on antimalarial drugs. However, beginning with the emergence of chloroquine (CQ)-resistant Plasmodium falciparum parasites 50 years ago, efforts to control the disease have been thwarted by failed or failing drugs. Mutations in the parasite’s ‘chloroquine resistance transporter’ (PfCRT) are the primary cause of CQ resistance. Furthermore, changes in PfCRT (and in several other transport proteins) are associated with decreases or increases in the parasite’s susceptibility to a number of other antimalarial drugs. Here, we review recent advances in our understanding of CQ resistance and discuss these in the broader context of the parasite’s susceptibilities to other quinolines and related drugs. We suggest that PfCRT can be viewed both as a ‘multidrug-resistance carrier’ and as a drug target, and that the quinoline-resistance mechanism is a potential ‘Achilles’ heel’ of the parasite. We examine a number of the antimalarial strategies currently undergoing development that are designed to exploit the resistance mechanism, including relatively simple measures, such as alternative CQ dosages, as well as new drugs that either circumvent the resistance mechanism or target it directly.", "title": "" }, { "docid": "58efd234d4ca9b10ccfc363db4c501d3", "text": "In order to understand the role of the medium osmolality on the metabolism of glumate-producing Corynebacterium glutamicum, effects of saline osmotic upshocks from 0.4 osnol. kg−1 to 2 osmol. kg−1 have been investigated on the growth kinetics and the intracellular content of the bacteria. Addition of a high concentration of NaCl after a few hours of batch culture results in a temporary interruption of the cellular growth. Cell growth resumes after about 1 h but at a specific rate that decreases with increasing medium osmolality. Investigation of the intracellular content showed, during the first 30 min following the shock, a rapid but transient influx of sodium ions. This was followed by a strong accumulation of proline, which rose from 5 to 110 mg/g dry weight at the end of the growth phase. A slight accumulation of intracellular glutamate from 60 to 75 mg/g dry weight was also observed. Accordingly, for Corynebacterium glutamicum an increased osmolality in the glutamate and proline synthesis during the growth phase.", "title": "" }, { "docid": "c9fc426722df72b247093779ad6e2c0e", "text": "Biped robots have better mobility than conventional wheeled robots, but they tend to tip over easily. 
To be able to walk stably in various environments, such as on rough terrain, up and down slopes, or in regions containing obstacles, it is necessary for the robot to adapt to the ground conditions with a foot motion, and maintain its stability with a torso motion. When the ground conditions and stability constraint are satisfied, it is desirable to select a walking pattern that requires small torque and velocity of the joint actuators. In this paper, we first formulate the constraints of the foot motion parameters. By varying the values of the constraint parameters, we can produce different types of foot motion to adapt to ground conditions. We then propose a method for formulating the problem of the smooth hip motion with the largest stability margin using only two parameters, and derive the hip trajectory by iterative computation. Finally, the correlation between the actuator specifications and the walking patterns is described through simulation studies, and the effectiveness of the proposed methods is confirmed by simulation examples and experimental results.", "title": "" }, { "docid": "641811eac0e8a078cf54130c35fd6511", "text": "Multi-label text classification (MLTC) aims to assign multiple labels to each sample in the dataset. The labels usually have internal correlations. However, traditional methods tend to ignore the correlations between labels. In order to capture the correlations between labels, the sequence-to-sequence (Seq2Seq) model views the MLTC task as a sequence generation problem, which achieves excellent performance on this task. However, the Seq2Seq model is not suitable for the MLTC task in essence. The reason is that it requires humans to predefine the order of the output labels, while some of the output labels in the MLTC task are essentially an unordered set rather than an ordered sequence. This conflicts with the strict requirement of the Seq2Seq model for the label order. In this paper, we propose a novel sequence-to-set framework utilizing deep reinforcement learning, which not only captures the correlations between labels, but also reduces the dependence on the label order. Extensive experimental results show that our proposed method outperforms the competitive baselines by a large margin.", "title": "" }, { "docid": "893942f986718d639aa46930124af679", "text": "In this work we consider the problem of controlling a team of microaerial vehicles moving quickly through a three-dimensional environment while maintaining a tight formation. The formation is specified by a shape matrix that prescribes the relative separations and bearings between the robots. Each robot plans its trajectory independently based on its local information of other robot plans and estimates of states of other robots in the team to maintain the desired shape. We explore the interaction between nonlinear decentralized controllers, the fourth-order dynamics of the individual robots, the time delays in the network, and the effects of communication failures on system performance. An experimental evaluation of our approach on a team of quadrotors suggests that suitable performance is maintained as the formation motions become increasingly aggressive and as communication degrades.", "title": "" }, { "docid": "ccddd7df2b5246c44d349bfb0aae499a", "text": "We consider stochastic multi-armed bandit problems with complex actions over a set of basic arms, where the decision maker plays a complex action rather than a basic arm in each round. 
The reward of the complex action is some function of the basic arms’ rewards, and the feedback observed may not necessarily be the reward per-arm. For instance, when the complex actions are subsets of the arms, we may only observe the maximum reward over the chosen subset. Thus, feedback across complex actions may be coupled due to the nature of the reward function. We prove a frequentist regret bound for Thompson sampling in a very general setting involving parameter, action and observation spaces and a likelihood function over them. The bound holds for discretely-supported priors over the parameter space without additional structural properties such as closed-form posteriors, conjugate prior structure or independence across arms. The regret bound scales logarithmically with time but, more importantly, with an improved constant that non-trivially captures the coupling across complex actions due to the structure of the rewards. As applications, we derive improved regret bounds for classes of complex bandit problems involving selecting subsets of arms, including the first nontrivial regret bounds for nonlinear MAX reward feedback from subsets.", "title": "" }, { "docid": "67fc5fffc5f032007ac89dda8d0f877c", "text": "Phishing scam is a well-known fraudulent activity in which victims are tricked to reveal their confidential information especially those related to financial information. There are various phishing schemes such as deceptive phishing, malware based phishing, DNS-based phishing and many more. Therefore in this paper, a systematic review analysis on existing works related with the phishing detection and response techniques together with apoptosis have been further investigated and evaluated. Furthermore, one case study to show the proof of concept how the phishing works is also discussed in this paper. This paper also discusses the challenges and the potential research for future work related with the integration of phishing detection model and response with apoptosis. This research paper also can be used as a reference and guidance for further study on phishing detection and response. Keywords—Phishing; apoptosis; phishing detection; phishing", "title": "" }, { "docid": "4f1070b988605290c1588918a716cef2", "text": "The aim of this paper was to predict the static bending modulus of elasticity (MOES) and modulus of rupture (MOR) of Scots pine (Pinus sylvestris L.) wood using three nondestructive techniques. The mean values of the dynamic modulus of elasticity based on flexural vibration (MOEF), longitudinal vibration (MOELV), and indirect ultrasonic (MOEUS) were 13.8, 22.3, and 30.9 % higher than the static modulus of elasticity (MOES), respectively. The reduction of this difference, taking into account the shear deflection effect in the output values for static bending modulus of elasticity, was also discussed in this study. The three dynamic moduli of elasticity correlated well with the static MOES and MOR; correlation coefficients ranged between 0.68 and 0.96. The correlation coefficients between the dynamic moduli and MOES were higher than those between the dynamic moduli and MOR. The highest correlation between the dynamic moduli and static bending properties was obtained by the flexural vibration technique in comparison with longitudinal vibration and indirect ultrasonic techniques. 
Results showed that there was no obvious relationship between the density and the acoustic wave velocity that was obtained from the longitudinal vibration and ultrasonic techniques.", "title": "" }, { "docid": "2c8c8511e1391d300bfd4b0abd5ecea4", "text": "In 2009, we reported on a new Intelligent Tutoring Systems (ITS) technology, example-tracing tutors, that can be built without programming using the Cognitive Tutor Authoring Tools (CTAT). Creating example-tracing tutors was shown to be 4–8 times as cost-effective as estimates for ITS development from the literature. Since 2009, CTAT and its associated learning management system, the Tutorshop, have been extended and have been used for both research and real-world instruction. As evidence that example-tracing tutors are an effective and mature ITS paradigm, CTAT-built tutors have been used by approximately 44,000 students and account for 40 % of the data sets in DataShop, a large open repository for educational technology data sets. We review 18 example-tracing tutors built since 2009, which have been shown to be effective in helping students learn in real educational settings, often with large pre/post effect sizes. These tutors support a variety of pedagogical approaches, beyond step-based problem solving, including collaborative learning, educational games, and guided invention activities. CTAT and other ITS authoring tools illustrate that non-programmer approaches to building ITS are viable and useful and will likely play a key role in making ITS widespread.", "title": "" }, { "docid": "212f128450a141b5b4c83c8c57d14677", "text": "Local Authority road networks commonly include roads with different functional characteristics and a variety of construction types, which require maintenance solutions tailored to their needs. Given this background, on local road network, pavement management is founded on the experience of the agency engineers and is often constrained by low budgets and a variety of environmental and external requirements. This paper forms part of a research work that investigates the use of digital techniques for obtaining field data in order to increase safety and reduce labour cost requirements using a semi-automated distress collection and measurement system. More specifically, a definition of a distress detection procedure is presented which aims at producing a result complying more closely to the distress identification manuals and protocols. The process comprises the following two steps: Automated pavement image collection. Images are collected using the high speed digital acquisition system of the Mobile Laboratory designed and implemented by the Department of Civil and Environmental Engineering of the University of Catania; Distress Detection. By way of the Pavement Distress Analyser (PDA), a specialised software, images are adjusted to eliminate their optical distortion. Cracks, potholes and patching are automatically detected and subsequently classified by means of an operator assisted approach. An intense, experimental field survey has made it possible to establish that the procedure obtains more consistent distress measurements than a manual survey thus increasing its repeatability, reducing costs and increasing safety during the survey. 
Moreover, the pilot study made it possible to validate results coming from a survey carried out under normal traffic conditions, concluding that it is feasible to integrate the procedure into a roadway pavement management system.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "cc2cd5868ca8b2e9713e5659c61747c5", "text": "Phylogenetic analysis is sometimes regarded as being an intimidating, complex process that requires expertise and years of experience. In fact, it is a fairly straightforward process that can be learned quickly and applied effectively. This Protocol describes the several steps required to produce a phylogenetic tree from molecular data for novices. In the example illustrated here, the program MEGA is used to implement all those steps, thereby eliminating the need to learn several programs, and to deal with multiple file formats from one step to another (Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S. 2011. MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol. 28:2731-2739). The first step, identification of a set of homologous sequences and downloading those sequences, is implemented by MEGA's own browser built on top of the Google Chrome toolkit. For the second step, alignment of those sequences, MEGA offers two different algorithms: ClustalW and MUSCLE. For the third step, construction of a phylogenetic tree from the aligned sequences, MEGA offers many different methods. Here we illustrate the maximum likelihood method, beginning with MEGA's Models feature, which permits selecting the most suitable substitution model. Finally, MEGA provides a powerful and flexible interface for the final step, actually drawing the tree for publication. Here a step-by-step protocol is presented in sufficient detail to allow a novice to start with a sequence of interest and to build a publication-quality tree illustrating the evolution of an appropriate set of homologs of that sequence. MEGA is available for use on PCs and Macs from www.megasoftware.net.", "title": "" }, { "docid": "bf7305ceee06b3672825032b78c5e22f", "text": "Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. 
Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.", "title": "" }, { "docid": "dea235c392f876cae8004166209ace3d", "text": "Vehicular ad hoc networking is an emerging technology for future on-the-road communications. Due to the virtue of vehicle-to-vehicle and vehicle-to-infrastructure communications, vehicular ad hoc networks (VANETs) are expected to enable a plethora of communication-based automotive applications including diverse in-vehicle infotainment applications and road safety services. Even though vehicles are organized mostly in an ad hoc manner in the network topology, directly applying the existing communication approaches designed for traditional mobile ad hoc networks to large-scale VANETs with fast-moving vehicles can be ineffective and inefficient. To achieve success in a vehicular environment, VANET-specific communication solutions are imperative. In this paper, we provide a comprehensive overview of various radio channel access protocols and resource management approaches, and discuss their suitability for infotainment and safety service support in VANETs. Further, we present recent research activities and related projects on vehicular communications. Potential challenges and open research issues are also", "title": "" }, { "docid": "014f1369be6a57fb9f6e2f642b3a4926", "text": "VNC is platform-independent – a VNC viewer on one operating system may connect to a VNC server on the same or any other operating system. There are clients and servers for many GUIbased operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one's work computer from one's home computer, or vice versa.", "title": "" }, { "docid": "a76be3ebe7b169f3669243271d2474a6", "text": "Sophisticated video processing effects require both image and geometry information. We explore the possibility to augment a video camera with a recent infrared time-of-flight depth camera, to capture high-resolution RGB and low-resolution, noisy depth at video frame rates. To turn such a setup into a practical RGBZ video camera, we develop efficient data filtering techniques that are tailored to the noise characteristics of IR depth cameras. We first remove typical artefacts in the RGBZ data and then apply an efficient spatiotemporal denoising and upsampling scheme. This allows us to record temporally coherent RGBZ videos at interactive frame rates and to use them to render a variety of effects in unprecedented quality. We show effects such as video relighting, geometry-based abstraction and stylisation, background segmentation and rendering in stereoscopic 3D.", "title": "" }, { "docid": "488b0adfe43fc4dbd9412d57284fc856", "text": "We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++. 
", "title": "" } ]
scidocsrr
f0ee456f13048f1fe2a1314c18aa5e69
A Frequency-Reconfigurable Quasi-Yagi Dipole Antenna
[ { "docid": "6661cc34d65bae4b09d7c236d0f5400a", "text": "In this letter, we present a novel coplanar waveguide fed quasi-Yagi antenna with broad bandwidth. The uniqueness of this design is due to its simple feed selection and despite this, its achievable bandwidth. The 10 dB return loss bandwidth of the antenna is 44% covering X-band. The antenna is realized on a high dielectric constant substrate and is compatible with microstrip circuitry and active devices. The gain of the antenna is 7.4 dBi, the front-to-back ratio is 15 dB and the nominal efficiency of the radiator is 95%.", "title": "" } ]
[ { "docid": "d86969ab9471333c6eca4af5092b64b6", "text": "We investigate the problem of sequential linear prediction for real life big data applications. The second order algorithms, i.e., Newton-Raphson Methods, asymptotically achieve the performance of the ”best” possible linear predictor much faster compared to the first order algorithms, e.g., Online Gradient Descent. However, implementation of these methods is not usually feasible in big data applications because of the extremely high computational needs. To this end, we introduce a highly efficient implementation reducing the computational complexity of the second order methods from quadratic to linear scale. We do not rely on any statistical assumptions, hence, lose no information. We demonstrate the computational efficiency of our algorithm on a real life sequential big dataset.", "title": "" }, { "docid": "89652309022bc00c7fd76c4fe1c5d644", "text": "In first encounters people quickly form impressions of each other’s personality and interpersonal attitude. We conducted a study to investigate how this transfers to first encounters between humans and virtual agents. In the study, subjects’ avatars approached greeting agents in a virtual museum rendered in both first and third person perspective. Each agent exclusively exhibited nonverbal immediacy cues (smile, gaze and proximity) during the approach. Afterwards subjects judged its personality (extraversion) and interpersonal attitude (hostility/friendliness). We found that within only 12.5 seconds of interaction subjects formed impressions of the agents based on observed behavior. In particular, proximity had impact on judgments of extraversion whereas smile and gaze on friendliness. These results held for the different camera perspectives. Insights on how the interpretations might change according to the user’s own personality are also provided.", "title": "" }, { "docid": "f6592e6495527a8e8df9bede4e983e12", "text": "All Internet facing systems and applications carry security risks. Security professionals across the globe generally address these security risks by Vulnerability Assessment and Penetration Testing (VAPT). The VAPT is an offensive way of defending the cyber assets of an organization. It consists of two major parts, namely Vulnerability Assessment (VA) and Penetration Testing (PT). Vulnerability assessment, includes the use of various automated tools and manual testing techniques to determine the security posture of the target system. In this step all the breach points and loopholes are found. These breach points/loopholes if found by an attacker can lead to heavy data loss and fraudulent intrusion activities. In Penetration testing the tester simulates the activities of a malicious attacker who tries to exploit the vulnerabilities of the target system. In this step the identified set of vulnerabilities in VA is used as input vector. This process of VAPT helps in assessing the effectiveness of the security measures that are present on the target system. In this paper we have described the entire process of VAPT, along with all the methodologies, models and standards. A shortlisted set of efficient and popular open source/free tools which are useful in conducting VAPT and the required list of precautions is given. 
A case study of a VAPT test conducted on a bank system using the shortlisted tools is also discussed.", "title": "" }, { "docid": "acb3aaaf79ebc3fc65724e92e4d076aa", "text": "Lay dispositionism refers to lay people's tendency to use traits as the basic unit of analysis in social perception (L. Ross & R. E. Nisbett, 1991). Five studies explored the relation between the practices indicative of lay dispositionism and people's implicit theories about the nature of personal attributes. As predicted, compared with those who believed that personal attributes are malleable (incremental theorists), those who believed in fixed traits (entity theorists) used traits or trait-relevant information to make stronger future behavioral predictions (Studies 1 and 2) and made stronger trait inferences from behavior (Study 3). Moreover, the relation between implicit theories and lay dispositionism was found in both the United States (a more individualistic culture) and Hong Kong (a more collectivistic culture), suggesting this relation to be generalizable across cultures (Study 4). Finally, an experiment in which implicit theories were manipulated provided preliminary evidence for the possible causal role of implicit theories in lay dispositionism (Study 5).", "title": "" }, { "docid": "5dba3258382d9781287cdcb6b227153c", "text": "Mobile sensing systems employ various sensors in smartphones to extract human-related information. As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study on the feasibility and gaining properties of a crowdsensing system that primarily concerns sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility by using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.", "title": "" }, { "docid": "8e6debae3b3d3394e87e671a14f8819e", "text": "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.", "title": "" }, { "docid": "2b3c9b9f92582af41fcde0186c9bd0f6", "text": "Person re-identification is a challenging task mainly due to factors such as background clutter, pose, illumination and camera point of view variations. 
These elements hinder the process of extracting robust and discriminative representations, hence preventing different identities from being successfully distinguished. To improve the representation learning, usually local features from human body parts are extracted. However, the common practice for such a process has been based on bounding box part detection. In this paper, we propose to adopt human semantic parsing which, due to its pixel-level accuracy and capability of modeling arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing in person re-identification and not only considerably outperforms its counter baseline, but achieves state-of-the-art performance. We also show that, by employing a simple yet effective training strategy, standard popular deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification, while operating solely on full image, can dramatically outperform current state-of-the-art. Our proposed methods improve state-of-the-art person re-identification on: Market-1501 [48] by ~17% in mAP and ~6% in rank-1, CUHK03 [24] by ~4% in rank-1 and DukeMTMC-reID [50] by ~24% in mAP and ~10% in rank-1.", "title": "" }, { "docid": "a433f47a3c7c06a409a8fc0d98e955be", "text": "The local-dimming backlight has recently been presented for use in LCD TVs. However, the image resolution is low, particularly at weak edges. In this work, a local-dimming backlight is developed to improve the image contrast and reduce power dissipation. The algorithm enhances low-level edge information to improve the perceived image resolution. Based on the algorithm, a 42-in backlight module with white light-emitting diode (LED) devices was driven by a local dimming control core. The block-wise register approach substantially reduced the number of required line-buffers and shortened the latency time. The measurements made in the laboratory indicate that the backlight system reduces power dissipation by an average of 48% and exhibits no visible distortion compared relative to the fixed backlighting system. The system was successfully demonstrated in a 42-in LCD TV, and the contrast ratio was greatly improved by a factor of 100.", "title": "" }, { "docid": "80c1f7e845e21513fc8eaf644b11bdc5", "text": "We describe survey results from a representative sample of 1,075 U. S. social network users who use Facebook as their primary network. Our results show a strong association between low engagement and privacy concern. Specifically, users who report concerns around sharing control, comprehension of sharing practices or general Facebook privacy concern, also report consistently less time spent as well as less (self-reported) posting, commenting and \"Like\"ing of content. The limited evidence of other significant differences between engaged users and others suggests that privacy-related concerns may be an important gate to engagement. Indeed, privacy concern and network size are the only malleable attributes that we find to have significant association with engagement. We manually categorize the privacy concerns finding that many are nonspecific and not associated with negative personal experiences. 
Finally, we identify some education and utility issues associated with low social network activity, suggesting avenues for increasing engagement amongst current users.", "title": "" }, { "docid": "37f55e03f4d1ff3b9311e537dc7122b5", "text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.", "title": "" }, { "docid": "0ef77e74b310e7bac2584a3e49d63ce1", "text": "We focus on named entity recognition (NER) for Chinese social media. With massive unlabeled text and quite limited labelled corpus, we propose a semisupervised learning model based on BLSTM neural network. To take advantage of traditional methods in NER such as CRF, we combine transition probability with deep learning in our model. To bridge the gap between label accuracy and F-score of NER, we construct a model which can be directly trained on F-score. When considering the instability of Fscore driven method and meaningful information provided by label accuracy, we propose an integrated method to train on both F-score and label accuracy. Our integrated model yields substantial improvement over previous state-of-the-art result.", "title": "" }, { "docid": "a059fc50eb0e4cab21b04a75221b3160", "text": "This paper presents the design of an X-band active antenna self-oscillating down-converter mixer in substrate integrated waveguide technology (SIW). Electromagnetic analysis is used to design a SIW cavity backed patch antenna with resonance at 9.9 GHz used as the receiving antenna, and subsequently harmonic balance analysis combined with optimization techniques are used to synthesize a self-oscillating mixer with oscillating frequency of 6.525 GHz. The conversion gain is optimized for the mixing product involving the second harmonic of the oscillator and the RF input signal, generating an IF frequency of 3.15 GHz to have conversion gain in at least 600 MHz bandwidth around the IF frequency. 
The active antenna circuit finds application in compact receiver front-end modules as well as active self-oscillating mixer arrays.", "title": "" }, { "docid": "aa30fc0f921509b1f978aeda1140ffc0", "text": "Arithmetic coding provides an e ective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an e cient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible e ect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.", "title": "" }, { "docid": "2bd090c2604b94e24e8f9814549c4a95", "text": "Density estimation forms a critical component of many analytics tasks including outlier detection, visualization, and statistical testing. These tasks often seek to classify data into high and low-density regions of a probability distribution. Kernel Density Estimation (KDE) is a powerful technique for computing these densities, offering excellent statistical accuracy but quadratic total runtime. In this paper, we introduce a simple technique for improving the performance of using a KDE to classify points by their density (density classification). Our technique, thresholded kernel density classification (tKDC), applies threshold-based pruning to spatial index traversal to achieve asymptotic speedups over naïve KDE, while maintaining accuracy guarantees. Instead of exactly computing each point's exact density for use in classification, tKDC iteratively computes density bounds and short-circuits density computation as soon as bounds are either higher or lower than the target classification threshold. On a wide range of dataset sizes and dimensions, tKDC demonstrates empirical speedups of up to 1000x over alternatives.", "title": "" }, { "docid": "9978f33847a09c651ccce68c3b88287f", "text": "We propose a method for discovering the dependency relationships between the topics of documents shared in social networks using the latent social interactions, attempting to answer the question: given a seemingly new topic, from where does this topic evolve? In particular, we seek to discover the pair-wise probabilistic dependency in topics of documents which associate social actors from a latent social network, where these documents are being shared. By viewing the evolution of topics as a Markov chain, we estimate a Markov transition matrix of topics by leveraging social interactions and topic semantics. Metastable states in a Markov chain are applied to the clustering of topics. Applied to the CiteSeer dataset, a collection of documents in academia, we show the trends of research topics, how research topics are related and which are stable. We also show how certain social actors, authors, impact these topics and propose new ways for evaluating author impact.", "title": "" }, { "docid": "38c922ff8763d1a03b8beb37cc7bd4bb", "text": "As the number of devices connected to the Internet has been exponentially increasing, the degree of threats to those devices and networks has been also increasing. Various network scanning tools, which use fingerprinting techniques, have been developed to make the devices and networks secure by providing the information on its status. 
However, the tools may be used for malicious purposes. Using network scanning tools, attackers can not only obtain the information of devices such as the name of OS, version, and sessions but also find their vulnerabilities which can be used for further cyber-attacks. In this paper, we compare and analyze the performances of widely used network scanning tools such as Nmap and Nessus. The existing research on network scanning tools analyzed specific scanning tools and assumed there was only a small number of network devices. In this paper, we compare and analyze the performances of several tools in practical network environments with more than 40 devices. The results of this paper provide the direction to prevent possible attacks when they are utilized as attack tools as well as the practical understanding of the threats by network scanning tools and fingerprinting techniques.", "title": "" }, { "docid": "7ac1249e901e558443bc8751b11c9427", "text": "Despite the growing popularity of leasing as an alternative to purchasing a vehicle, there is very little research on how consumers choose among various leasing and financing (namely buying) contracts and how this choice affects the brand they choose. In this paper therefore, we develop a structural model of the consumer's choice of automobile brand and the related decision of whether to lease or buy it. We conceptualize the leasing and buying of the same vehicle as two different goods, each with its own costs and benefits. The differences between the two types of contracts are summarized along three dimensions: (i) the “net price” or financial cost of the contract, (ii) maintenance and repair costs and (iii) operating costs, which depend on the consumer's driving behavior. Based on consumer utility maximization, we derive a nested logit of brand and contract choice that captures the tradeoffs among all three costs. The model is estimated on a dataset of new car purchases from the near luxury segment of the automobile market. The optimal choice of brand and contract is determined by the consumer's implicit interest rate and the number of miles she expects to drive, both of which are estimated as parameters of the model. The empirical results yield several interesting findings. We find that (i) cars that deteriorate faster are more likely to be leased than bought, (ii) the estimated implicit interest rate is higher than the market rate, which implies that consumers do not make efficient tradeoffs between the net price and operating costs and may often incorrectly choose to lease and (iii) the estimate of the annual expected mileage indicates that most consumers would incur substantial penalties if they lease, which explains why buying or financing continues to be more popular than leasing. This research also provides several interesting managerial insights into the effectiveness of various promotional instruments. We examine this issue by looking at (i) sales response to a promotion, (ii) the ability of the promotion to draw sales from other brands and (iii) its overall profitability. We find, for example, that although the sales response to a cash rebate on a lease is greater than an equivalent increase in the residual value, under certain conditions and for certain brands, a residual value promotion yields higher profits.
These findings are of particular value to manufacturers in the prevailing competitive environment, which is marked by the extensive use of large rebates and 0% APR offers.", "title": "" }, { "docid": "ac4683be3ffc119f6eb64c4f295ffe2d", "text": "As data rates in electrical links rise to 56Gb/s, standards are gravitating towards PAM-4 modulation to achieve higher spectral efficiency. Such approaches are not without drawbacks, as PAM-4 signaling results in reduced vertical margins as compared to NRZ. This makes data recovery more susceptible to residual, or uncompensated, intersymbol interference (ISI) when the PAM-4 waveform is sampled by the receiver. To overcome this, existing standards such as OIF CEI 56Gb/s very short reach (VSR) require forward error correction to meet the target link BER of 1E-15. This comes at the expense of higher latency, which is undesirable for chip-to-chip VSR links in compute applications. Therefore, different channel equalization strategies should be considered for PAM-4 electrical links. Employing ½-UI (T/2) tap delays in an FFE extends the filter bandwidth as compared to baud- or T-spaced taps [1], resulting in improved timing margins and lower residual ISI for 56Gb/s PAM-4 data sent across VSR channels. While T/2-spaced FFEs have been reported in optical receivers for dispersion compensation [2], the analog delay techniques used are not conducive to designing dense I/O and cannot support a wide range of data rates. This work demonstrates a 56Gb/s PAM-4 transmitter with a T/2-spaced FFE using high-speed clocking techniques to produce well-controlled tap delays that are data-rate agile. The transmitter also supports T-spaced tap delays, ensuring compatibility with existing standards.", "title": "" }, { "docid": "73e24b2743efb3eead62cb1d8cc4c74d", "text": "Enterprise Resource Planning (ERP) systems have been implemented globally and their implementation has been extensively studied during the past decade. However, many organizations are still struggling to derive benefits from the implemented ERP systems. Therefore, ensuring post-implementation success has become the focus of the current ERP research. This study develops an integrative model to explain the post-implementation success of ERP, based on the Technology–Organization–Environment (TOE) theory. We posit that ERP implementation quality (the technological aspect) consisting of project management and system configuration, organizational readiness (the organizational aspect) consisting of leadership involvement and organizational fit, and external support (the environmental aspect) will positively affect the post-implementation success of ERP. An empirical test was conducted in the Chinese retail industry. The results show that both ERP implementation quality and organizational readiness significantly affect post-implementation success, whereas external support does not. The theoretical and practical implications of the findings are discussed. © 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "be5b0dd659434e77ce47034a51fd2767", "text": "Current obstacles in the study of social media marketing, such as dealing with massive data and real-time updates, have motivated solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing.
Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has been limited. Most of the literature has focused on achieving objectives such as influence maximization or community detection. Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has been applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to address the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks", "title": "" } ]
scidocsrr
0beab3e99259c697748456cbf8ea89ec
Depth Estimation from Image Structure
[ { "docid": "9bf157e016f4fc124128a3008dc1c47c", "text": "The appearance of an object is composed of local structure. This local structure can be described and characterized by a vector of local features measured by local operators such as Gaussian derivatives or Gabor filters. This article presents a technique where appearances of objects are represented by the joint statistics of such local neighborhood operators. As such, this represents a new class of appearance based techniques for computer vision. Based on joint statistics, the paper develops techniques for the identification of multiple objects at arbitrary positions and orientations in a cluttered scene. Experiments show that these techniques can identify over 100 objects in the presence of major occlusions. Most remarkably, the techniques have low complexity and therefore run in real-time.", "title": "" } ]
[ { "docid": "e6f506c3c90a15b5e4079ccb75eb3ff0", "text": "Stories of people's everyday experiences have long been the focus of psychology and sociology research, and are increasingly being used in innovative knowledge-based technologies. However, continued research in this area is hindered by the lack of standard corpora of sufficient size and by the costs of creating one from scratch. In this paper, we describe our efforts to develop a standard corpus for researchers in this area by identifying personal stories in the tens of millions of blog posts in the ICWSM 2009 Spinn3r Dataset. Our approach was to employ statistical text classification technology on the content of blog entries, which required the creation of a sufficiently large set of annotated training examples. We describe the development and evaluation of this classification technology and how it was applied to the dataset in order to identify nearly a million", "title": "" }, { "docid": "59b26acc158c728cf485eae27de665f7", "text": "The ability of the parasite Plasmodium falciparum to evade the immune system and be sequestered within human small blood vessels is responsible for severe forms of malaria. The sequestration depends on the interaction between human endothelial receptors and P. falciparum erythrocyte membrane protein 1 (PfEMP1) exposed on the surface of the infected erythrocytes (IEs). In this study, the transcriptomes of parasite populations enriched for parasites that bind to human P-selectin, E-selectin, CD9 and CD151 receptors were analysed. IT4_var02 and IT4_var07 were specifically expressed in IT4 parasite populations enriched for P-selectin-binding parasites; eight var genes (IT4_var02/07/09/13/17/41/44/64) were specifically expressed in isolate populations enriched for CD9-binding parasites. Interestingly, IT4 parasite populations enriched for E-selectin- and CD151-binding parasites showed identical expression profiles to those of a parasite population exposed to wild-type CHO-745 cells. The same phenomenon was observed for the 3D7 isolate population enriched for binding to P-selectin, E-selectin, CD9 and CD151. This implies that the corresponding ligands for these receptors have either weak binding capacity or do not exist on the IE surface. Conclusively, this work expanded our understanding of P. falciparum adhesive interactions, through the identification of var transcripts that are enriched within the selected parasite populations.", "title": "" }, { "docid": "392f7b126431b202d57d6c25c07f7f7c", "text": "Serine racemase (SRace) is an enzyme that catalyzes the conversion of L-serine to pyruvate or D-serine, an endogenous agonist for NMDA receptors. Our previous studies showed that inflammatory stimuli such as Abeta could elevate steady-state mRNA levels for SRace, perhaps leading to inappropriate glutamatergic stimulation under conditions of inflammation. We report here that a proinflammatory stimulus (lipopolysaccharide) elevated the activity of the human SRace promoter, as indicated by expression of a luciferase reporter system transfected into a microglial cell line. This effect corresponded to an elevation of SRace protein levels in microglia, as well. By contrast, dexamethasone inhibited the SRace promoter activity and led to an apparent suppression of SRace steady-state mRNA levels. A potential binding site for NFkappaB was explored, but this sequence played no significant role in SRace promoter activation. 
Instead, large deletions and site-directed mutagenesis indicated that a DNA element between -1382 and -1373 (relative to the start of translation) was responsible for the activation of the promoter by lipopolysaccharide. This region fits the consensus for an activator protein-1 binding site. Lipopolysaccharide induced an activity capable of binding this DNA element in electrophoretic mobility shift assays. Supershifts with antibodies against c-Fos and JunB identified these as the responsible proteins. An inhibitor of Jun N-terminal kinase blocked SRace promoter activation, further implicating activator protein-1. These data indicate that proinflammatory stimuli utilize a signal transduction pathway culminating in activator protein-1 activation to induce expression of serine racemase.", "title": "" }, { "docid": "333b21433d17a9d271868e203c8a9481", "text": "The aim of stock prediction is to effectively predict future stock market trends (or stock prices), which can lead to increased profit. One major stock analysis method is the use of candlestick charts. However, candlestick chart analysis has usually been based on the utilization of numerical formulas. There has been no work taking advantage of an image processing technique to directly analyze the visual content of the candlestick charts for stock prediction. Therefore, in this study we apply the concept of image retrieval to extract seven different wavelet-based texture features from candlestick charts. Then, similar historical candlestick charts are retrieved based on different texture features related to the query chart, and the “future” stock movements of the retrieved charts are used for stock prediction. To assess the applicability of this approach to stock prediction, two datasets are used, containing 5-year and 10-year training and testing sets, collected from the Dow Jones Industrial Average Index (INDU) for the period between 1990 and 2009. Moreover, two datasets (2010 and 2011) are used to further validate the proposed approach. The experimental results show that visual content extraction and similarity matching of candlestick charts is a new and useful analytical method for stock prediction. More specifically, we found that the extracted feature vectors of 30, 90, and 120, the number of textual features extracted from the candlestick charts in the BMP format, are more suitable for predicting stock movements, while the 90 feature vector offers the best performance for predicting short- and medium-term stock movements. That is, using the 90 feature vector provides the lowest MAPE (3.031%) and Theil’s U (1.988%) rates in the twenty-year dataset, and the best MAPE (2.625%, 2.945%) and Theil’s U (1.622%, 1.972%) rates in the two validation datasets (2010 and 2011).", "title": "" }, { "docid": "4cd36ace8473aeaa61ced34b548c6585", "text": "OBJECTIVE\nSmaller hippocampal volume has been reported only in some but not all studies of unipolar major depressive disorder. Severe stress early in life has also been associated with smaller hippocampal volume and with persistent changes in the hypothalamic-pituitary-adrenal axis. However, prior hippocampal morphometric studies in depressed patients have neither reported nor controlled for a history of early childhood trauma. 
In this study, the volumes of the hippocampus and of control brain regions were measured in depressed women with and without childhood abuse and in healthy nonabused comparison subjects.\n\n\nMETHOD\nStudy participants were 32 women with current unipolar major depressive disorder-21 with a history of prepubertal physical and/or sexual abuse and 11 without a history of prepubertal abuse-and 14 healthy nonabused female volunteers. The volumes of the whole hippocampus, temporal lobe, and whole brain were measured on coronal MRI scans by a single rater who was blind to the subjects' diagnoses.\n\n\nRESULTS\nThe depressed subjects with childhood abuse had an 18% smaller mean left hippocampal volume than the nonabused depressed subjects and a 15% smaller mean left hippocampal volume than the healthy subjects. Right hippocampal volume was similar across the three groups. The right and left hippocampal volumes in the depressed women without abuse were similar to those in the healthy subjects.\n\n\nCONCLUSIONS\nA smaller hippocampal volume in adult women with major depressive disorder was observed exclusively in those who had a history of severe and prolonged physical and/or sexual abuse in childhood. An unreported history of childhood abuse in depressed subjects could in part explain the inconsistencies in hippocampal volume findings in prior studies in major depressive disorder.", "title": "" }, { "docid": "e7646a79b25b2968c3c5b668d0216aa6", "text": "In this paper, an image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions. Low-level features describing the color, position, size and shape of the resulting regions are extracted and are automatically mapped to appropriate intermediate-level descriptors forming a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) in a human-centered fashion. When querying, clearly irrelevant image regions are rejected using the intermediate-level descriptors; following that, a relevance feedback mechanism employing the low-level features is invoked to produce the final query results. The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image of the collection, and query-by-example approaches, which assume that the user queries for images similar to one that already is at his disposal.", "title": "" }, { "docid": "8999e010ddbc0aa7ef579d8a9e055769", "text": "Convolutional Neural Networks (CNNs) have gained popularity in many computer vision applications such as image classification, face detection, and video analysis, because of their ability to train and classify with high accuracy. Due to multiple convolution and fully-connected layers that are compute-/memory-intensive, it is difficult to perform real-time classification with low power consumption on today's computing systems. FPGAs have been widely explored as hardware accelerators for CNNs because of their reconfigurability and energy efficiency, as well as fast turn-around-time, especially with high-level synthesis methodologies.
Previous FPGA-based CNN accelerators, however, typically implemented generic accelerators agnostic to the CNN configuration, where the reconfigurable capabilities of FPGAs are not fully leveraged to maximize the overall system throughput. In this work, we present a systematic design space exploration methodology to maximize the throughput of an OpenCL-based FPGA accelerator for a given CNN model, considering the FPGA resource constraints such as on-chip memory, registers, computational resources and external memory bandwidth. The proposed methodology is demonstrated by optimizing two representative large-scale CNNs, AlexNet and VGG, on two Altera Stratix-V FPGA platforms, DE5-Net and P395-D8 boards, which have different hardware resources. We achieve a peak performance of 136.5 GOPS for convolution operation, and 117.8 GOPS for the entire VGG network that performs ImageNet classification on P395-D8 board.", "title": "" }, { "docid": "19c3c2ac5e35e8e523d796cef3717d90", "text": "The printing press long ago and the computer today have made widespread access to information possible. Learning theorists have suggested, however, that mere information is a poor way to learn. Instead, more effective learning comes through doing. While the most popularized element of today's MOOCs are the video lectures, many MOOCs also include interactive activities that can afford learning by doing. This paper explores the learning benefits of the use of informational assets (e.g., videos and text) in MOOCs, versus the learning by doing opportunities that interactive activities provide. We find that students doing more activities learn more than students watching more videos or reading more pages. We estimate the learning benefit from extra doing (1 SD increase) to be more than six times that of extra watching or reading. Our data, from a psychology MOOC, is correlational in character, however we employ causal inference mechanisms to lend support for the claim that the associations we find are causal.", "title": "" }, { "docid": "3eccedb5a9afc0f7bc8b64c3b5ff5434", "text": "The design of a high impedance, high Q tunable load is presented with operating frequency between 400MHz and close to 6GHz. The bandwidth is made independently tunable of the carrier frequency by using an active inductor resonator with multiple tunable capacitances. The Q factor can be tuned from a value 40 up to 300. The circuit is targeted at 5G wideband applications requiring narrow band filtering where both centre frequency and bandwidth needs to be tunable. The circuit impedance is applied to the output stage of a standard CMOS cascode and results show that high Q factors can be achieved close to 6GHz with 11dB rejection at 20MHz offset from the centre frequency. The circuit architecture takes advantage of currently available low cost, low area tunable capacitors based on micro-electromechanical systems (MEMS) and Barium Strontium Titanate (BST).", "title": "" }, { "docid": "144480a9154226cf4a72f149ff6c9c56", "text": "The availability of medical imaging data from clinical archives, research literature, and clinical manuals, coupled with recent advances in computer vision offer the opportunity for image-based diagnosis, teaching, and biomedical research. However, the content and semantics of an image can vary depending on its modality and as such the identification of image modality is an important preliminary step. 
The key challenge for automatically classifying the modality of a medical image is due to the visual characteristics of different modalities: some are visually distinct while others may have only subtle differences. This challenge is compounded by variations in the appearance of images based on the diseases depicted and a lack of sufficient training data for some modalities. In this paper, we introduce a new method for classifying medical images that uses an ensemble of different convolutional neural network (CNN) architectures. CNNs are a state-of-the-art image classification technique that learns the optimal image features for a given classification task. We hypothesise that different CNN architectures learn different levels of semantic image representation and thus an ensemble of CNNs will enable higher quality features to be extracted. Our method develops a new feature extractor by fine-tuning CNNs that have been initialized on a large dataset of natural images. The fine-tuning process leverages the generic image features from natural images that are fundamental for all images and optimizes them for the variety of medical imaging modalities. These features are used to train numerous multiclass classifiers whose posterior probabilities are fused to predict the modalities of unseen images. Our experiments on the ImageCLEF 2016 medical image public dataset (30 modalities; 6776 training images, and 4166 test images) show that our ensemble of fine-tuned CNNs achieves a higher accuracy than established CNNs. Our ensemble also achieves a higher accuracy than methods in the literature evaluated on the same benchmark dataset and is only overtaken by those methods that source additional training data.", "title": "" }, { "docid": "d17622889db09b8484d94392cadf1d78", "text": "Software development has always inherently required multitasking: developers switch between coding, reviewing, testing, designing, and meeting with colleagues. The advent of software ecosystems like GitHub has enabled something new: the ability to easily switch between projects. Developers also have social incentives to contribute to many projects; prolific contributors gain social recognition and (eventually) economic rewards. Multitasking, however, comes at a cognitive cost: frequent context-switches can lead to distraction, sub-standard work, and even greater stress. In this paper, we gather ecosystem-level data on a group of programmers working on a large collection of projects. We develop models and methods for measuring the rate and breadth of a developers' context-switching behavior, and we study how context-switching affects their productivity. We also survey developers to understand the reasons for and perceptions of multitasking. We find that the most common reason for multitasking is interrelationships and dependencies between projects. Notably, we find that the rate of switching and breadth (number of projects) of a developer's work matter. Developers who work on many projects have higher productivity if they focus on few projects per day. Developers that switch projects too much during the course of a day have lower productivity as they work on more projects overall. Despite these findings, developers perceptions of the benefits of multitasking are varied.", "title": "" }, { "docid": "46004ee1f126c8a5b76166c5dc081bc8", "text": "In this study, an energy harvesting chip was developed to scavenge energy from artificial light to charge a wireless sensor node. 
The chip core is a miniature transformer with a nano-ferrofluid magnetic core. The chip embedded transformer can convert harvested energy from its solar cell to variable voltage output for driving multiple loads. This chip system yields a simple, small, and more importantly, a battery-less power supply solution. The sensor node is equipped with multiple sensors that can be enabled by the energy harvesting power supply to collect information about the human body comfort degree. Compared with lab instruments, the nodes with temperature, humidity and photosensors driven by harvested energy had variation coefficient measurement precision of less than 6% deviation under low environmental light of 240 lux. The thermal comfort was affected by the air speed. A flow sensor equipped on the sensor node was used to detect airflow speed. Due to its high power consumption, this sensor node provided 15% less accuracy than the instruments, but it still can meet the requirement of analysis for predicted mean votes (PMV) measurement. The energy harvesting wireless sensor network (WSN) was deployed in a 24-hour convenience store to detect thermal comfort degree from the air conditioning control. During one year operation, the sensor network powered by the energy harvesting chip retained normal functions to collect the PMV index of the store. According to the one month statistics of communication status, the packet loss rate (PLR) is 2.3%, which is as good as the presented results of those WSNs powered by battery. Referring to the electric power records, almost 54% energy can be saved by the feedback control of an energy harvesting sensor network. These results illustrate that, scavenging energy not only creates a reliable power source for electronic devices, such as wireless sensor nodes, but can also be an energy source by building an energy efficient program.", "title": "" }, { "docid": "d8badd23313c7ea4baa0231ff1b44e32", "text": "Current state-of-the-art solutions for motion capture from a single camera are optimization driven: they optimize the parameters of a 3D human model so that its re-projection matches measurements in the video (e.g. person segmentation, optical flow, keypoint detections etc.). Optimization models are susceptible to local minima. This has been the bottleneck that forced using clean green-screen like backgrounds at capture time, manual initialization, or switching to multiple cameras as input resource. In this work, we propose a learning based motion capture model for single camera input. Instead of optimizing mesh and skeleton parameters directly, our model optimizes neural network weights that predict 3D shape and skeleton configurations given a monocular RGB video. Our model is trained using a combination of strong supervision from synthetic data, and self-supervision from differentiable rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c) human-background segmentation, in an end-to-end framework. Empirically we show our model combines the best of both worlds of supervised learning and test-time optimization: supervised learning initializes the model parameters in the right regime, ensuring good pose and surface initialization at test time, without manual effort. Self-supervision by back-propagating through differentiable rendering allows (unsupervised) adaptation of the model to the test data, and offers much tighter fit than a pretrained fixed model. 
We show that the proposed model improves with experience and converges to low-error solutions where previous optimization methods fail.", "title": "" }, { "docid": "53575c45a60f93c850206f2a467bc8e8", "text": "We present BPEmb, a collection of pre-trained subword unit embeddings in 275 languages, based on Byte-Pair Encoding (BPE). In an evaluation using fine-grained entity typing as testbed, BPEmb performs competitively, and for some languages better than alternative subword approaches, while requiring vastly fewer resources and no tokenization. BPEmb is available at https://github.com/bheinzerling/bpemb.", "title": "" }, { "docid": "c3e371b0c13f431cbf9b9278a6d40ace", "text": "Until today, most lecturers in universities are found still using the conventional methods of taking students' attendance either by calling out the student names or by passing around an attendance sheet for students to sign confirming their presence. In addition to the time-consuming issue, such method is also at higher risk of having students cheating about their attendance, especially in a large classroom. Therefore a method of taking attendance by employing an application running on the Android platform is proposed in this paper. This application, once installed can be used to download the students list from a designated web server. Based on the downloaded list of students, the device will then act like a scanner to scan each of the student cards one by one to confirm and verify the student's presence. The device's camera will be used as a sensor that will read the barcode printed on the students' cards. The updated attendance list is then uploaded to an online database and can also be saved as a file to be transferred to a PC later on. This system will help to eliminate the current problems, while also promoting a paperless environment at the same time. Since this application can be deployed on lecturers' own existing Android devices, no additional hardware cost is required.", "title": "" }, { "docid": "2c3e6373feb4352a68ec6fd109df66e0", "text": "A broadband transition design between broadside coupled stripline (BCS) and conductor-backed coplanar waveguide (CBCPW) is proposed and studied. The E-field of CBCPW is designed to be gradually changed to that of BCS via a simple linear tapered structure. Two back-to-back transitions are simulated, fabricated and measured. It is reported that maximum insertion loss of 2.3 dB, return loss of higher than 10 dB and group delay flatness of about 0.14 ns are obtained from 50 MHz to 20 GHz.", "title": "" }, { "docid": "7c783834f6ad0151f944766a91f0a67d", "text": "Estradiol is the most potent and ubiquitous member of a class of steroid hormones called estrogens. Fetuses and newborns are exposed to estradiol derived from their mother, their own gonads, and synthesized locally in their brains. Receptors for estradiol are nuclear transcription factors that regulate gene expression but also have actions at the membrane, including activation of signal transduction pathways. The developing brain expresses high levels of receptors for estradiol. The actions of estradiol on developing brain are generally permanent and range from establishment of sex differences to pervasive trophic and neuroprotective effects. Cellular end points mediated by estradiol include the following: 1) apoptosis, with estradiol preventing it in some regions but promoting it in others; 2) synaptogenesis, again estradiol promotes in some regions and inhibits in others; and 3) morphometry of neurons and astrocytes. 
Estradiol also impacts cellular physiology by modulating calcium handling, immediate-early-gene expression, and kinase activity. The specific mechanisms of estradiol action permanently impacting the brain are regionally specific and often involve neuronal/glial cross-talk. The introduction of endocrine disrupting compounds into the environment that mimic or alter the actions of estradiol has generated considerable concern, and the developing brain is a particularly sensitive target. Prostaglandins, glutamate, GABA, granulin, and focal adhesion kinase are among the signaling molecules co-opted by estradiol to differentiate male from female brains, but much remains to be learned. Only by understanding completely the mechanisms and impact of estradiol action on the developing brain can we also understand when these processes go awry.", "title": "" }, { "docid": "2ae96a524ba3b6c43ea6bfa112f71a30", "text": "Accurate quantification of gluconeogenic flux following alcohol ingestion in overnight-fasted humans has yet to be reported. [2-13C1]glycerol, [U-13C6]glucose, [1-2H1]galactose, and acetaminophen were infused in normal men before and after the consumption of 48 g alcohol or a placebo to quantify gluconeogenesis, glycogenolysis, hepatic glucose production, and intrahepatic gluconeogenic precursor availability. Gluconeogenesis decreased 45% vs. the placebo (0.56 ± 0.05 to 0.44 ± 0.04 mg ⋅ kg-1 ⋅ min-1vs. 0.44 ± 0.05 to 0.63 ± 0.09 mg ⋅ kg-1 ⋅ min-1, respectively, P < 0.05) in the 5 h after alcohol ingestion, and total gluconeogenic flux was lower after alcohol compared with placebo. Glycogenolysis fell over time after both the alcohol and placebo cocktails, from 1.46-1.47 mg ⋅ kg-1 ⋅ min-1to 1.35 ± 0.17 mg ⋅ kg-1 ⋅ min-1(alcohol) and 1.26 ± 0.20 mg ⋅ kg-1 ⋅ min-1, respectively (placebo, P < 0.05 vs. baseline). Hepatic glucose output decreased 12% after alcohol consumption, from 2.03 ± 0.21 to 1.79 ± 0.21 mg ⋅ kg-1 ⋅ min-1( P < 0.05 vs. baseline), but did not change following the placebo. Estimated intrahepatic gluconeogenic precursor availability decreased 61% following alcohol consumption ( P < 0.05 vs. baseline) but was unchanged after the placebo ( P < 0.05 between treatments). We conclude from these results that gluconeogenesis is inhibited after alcohol consumption in overnight-fasted men, with a somewhat larger decrease in availability of gluconeogenic precursors but a smaller effect on glucose production and no effect on plasma glucose concentrations. Thus inhibition of flux into the gluconeogenic precursor pool is compensated by changes in glycogenolysis, the fate of triose-phosphates, and peripheral tissue utilization of plasma glucose.", "title": "" }, { "docid": "fd786ae1792e559352c75940d84600af", "text": "In this paper, we obtain an (1 − e−1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n) function value computations. c © 2003 Published by Elsevier B.V.", "title": "" }, { "docid": "fad4ff82e9b11f28a70749d04dfbf8ca", "text": "This material is brought to you by the Journals at AIS Electronic Library (AISeL). It has been accepted for inclusion in Communications of the Association for Information Systems by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org. 
Enterprise architecture (EA) is the definition and representation of a high-level view of an enterprise's business processes and IT systems, their interrelationships, and the extent to which these processes and systems are shared by different parts of the enterprise. EA aims to define a suitable operating platform to support an organisation's future goals and the roadmap for moving towards this vision. Despite significant practitioner interest in the domain, understanding the value of EA remains a challenge. Although many studies make EA benefit claims, the explanations of why and how EA leads to these benefits are fragmented, incomplete, and not grounded in theory. This article aims to address this knowledge gap by focusing on the question: How does EA lead to organisational benefits? Through a careful review of EA literature, the paper consolidates the fragmented knowledge on EA benefits and presents the EA Benefits Model (EABM). The EABM proposes that EA leads to organisational benefits through its impact on four benefit enablers: Organisational Alignment, Information Availability, Resource Portfolio Optimisation, and Resource Complementarity. The article concludes with a discussion of a number of potential avenues for future research, which could build on the findings of this study.", "title": "" } ]
scidocsrr
898db191ed140cce001a89574c1ce0f2
A Case Study for Grain Quality Assurance Tracking based on a Blockchain Business Network
[ { "docid": "ce9487df62f75872d7111a26972feca7", "text": "In this chapter we provide an overview of the concept of blockchain technology and its potential to disrupt the world of banking through facilitating global money remittance, smart contracts, automated banking ledgers and digital assets. In this regard, we first provide a brief overview of the core aspects of this technology, as well as the second-generation contract-based developments. From there we discuss key issues that must be considered in developing such ledger based technologies in a banking context.", "title": "" }, { "docid": "930b48ac25cb646322406c98bf0ae383", "text": "The core technology of Bitcoin, the blockchain, has recently emerged as a disruptive innovation with a wide range of applications, potentially able to redesign our interactions in business, politics and society at large. Although scholarly interest in this subject is growing, a comprehensive analysis of blockchain applications from a political perspective is severely lacking to date. This paper aims to fill this gap and it discusses the key points of blockchain-based decentralized governance, which challenges to varying degrees the traditional mechanisms of State authority, citizenship and democracy. In particular, the paper verifies to which extent blockchain and decentralization platforms can be considered as hyper-political tools, capable to manage social interactions on large scale and dismiss traditional central authorities. The analysis highlights risks related to a dominant position of private powers in distributed ecosystems, which may lead to a general disempowerment of citizens and to the emergence of a stateless global society. While technological utopians urge the demise of any centralized institution, this paper advocates the role of the State as a necessary central point of coordination in society, showing that decentralization through algorithm-based consensus is an organizational theory, not a stand-alone political theory.", "title": "" } ]
[ { "docid": "c81967de1aee76b9937cbdcba3e07996", "text": "The combination of strength (ST) and plyometric training (PT) has been shown to be effective for improving sport-specific performance. However, there is no consensus about the most effective way to combine these methods in the same training session to produce greater improvements in neuromuscular performance of soccer players. Thus, the purpose of this study was to compare the effects of different combinations of ST and PT sequences on strength, jump, speed, and agility capacities of elite young soccer players. Twenty-seven soccer players (age: 18.9 ± 0.6 years) participated in an 8-week resistance training program and were divided into 3 groups: complex training (CP) (ST before PT), traditional training (TD) (PT before ST), and contrast training (CT) (ST and PT performed alternately, set by set). The experimental design took place during the competitive period of the season. The ST composed of half-squat exercises performed at 60-80% of 1 repetition maximum (1RM); the PT composed of drop jump exercises executed in a range from 30 to 45 cm. After the experimental period, the maximum dynamic strength (half-squat 1RM) and vertical jump ability (countermovement jump height) increased similarly and significantly in the CP, TD, and CT (48.6, 46.3, and 53% and 13, 14.2, and 14.7%, respectively). Importantly, whereas the TD group presented a significant decrease in sprinting speed in 10 (7%) and 20 m (6%), the other groups did not show this response. Furthermore, no significant alterations were observed in agility performance in any experimental group. In conclusion, in young soccer players, different combinations and sequences of ST and PT sets result in similar performance improvements in muscle strength and jump ability. However, it is suggested that the use of the CP and CT methods is more indicated to maintain/maximize the sprint performance of these athletes.", "title": "" }, { "docid": "fab72d1223fa94e918952b8715e90d30", "text": "A novel wideband crossed dipole loaded with four parasitic elements is investigated in this letter. The printed crossed dipole is incorporated with a pair of vacant quarter rings to feed the antenna. The antenna is backed by a metallic plate to provide an unidirectional radiation pattern with a wide axial-ratio (AR) bandwidth. To verify the proposed design, a prototype is fabricated and measured. The final design with an overall size of $0.46\\ \\lambda_{0}\\times 0.46\\ \\lambda_{0}\\times 0.23\\ \\lambda_{0} (> \\lambda_{0}$ is the free-space wavelength of circularly polarized center frequency) yields a 10-dB impedance bandwidth of approximately 62.7% and a 3-dB AR bandwidth of approximately 47.2%. In addition, the proposed antenna has a stable broadside gain of 7.9 ± 0.5 dBi within passband.", "title": "" }, { "docid": "558b2036fb15953743f8477fd5e4a138", "text": "According to recent estimates, about 90% of consumer received emails are machine-generated. Such messages include shopping receipts, promotional campaigns, newsletters, booking confirmations, etc. Most such messages are created by populating a fixed template with a small amount of personalized information, such as name, salutation, reservation numbers, dates, etc. Web mail providers (Gmail, Hotmail, Yahoo) are leveraging the structured nature of such emails to extract salient information and use it to improve the user experience: e.g. by automatically entering reservation data into a user calendar, or by sending alerts about upcoming shipments. 
To facilitate these extraction tasks it is helpful to classify templates according to their category, e.g. restaurant reservations or bill reminders, since each category triggers a particular user experience. Recent research has focused on discovering the causal thread of templates, e.g. inferring that a shopping order is usually followed by a shipping confirmation, an airline booking is followed by a confirmation and then by a “ready to check in” message, etc. Gamzu et al. took this idea one step further by implementing a method to predict the template category of future emails for a given user based on previously received templates. The motivation is that predicting future emails has a wide range of potential applications, including better user experiences (e.g. warning users of items ordered but not shipped), targeted advertising (e.g. users that recently made a flight reservation may be interested in hotel reservations), and spam classification (a message that is part of a legitimate causal thread is unlikely to be spam). The gist of the Gamzu et al. approach is modeling the problem as a Markov chain, where the nodes are templates or temporal events (e.g. the first day of the month). This paper expands on their work by investigating the use of neural networks for predicting the category of emails that will arrive during a fixed-sized time window in the future. We consider two types of neural networks: multilayer perceptrons (MLP), a type of feedforward neural network; and long short-term memory (LSTM), a type of recurrent neural network. For each type of neural network, we explore the effects The work was completed at Google Research. c ©2017 International World Wide Web Conference Committee (IW3C2), published under Creative Commons CC-BY-NC-ND 2.0 License. WWW 2017 Companion,, April 3–7, 2017, Perth, Austraila. ACM 978-1-4503-4914-7/17/04. http://dx.doi.org/10.1145/3041021.3055166 of varying their configuration (e.g. number of layers or number of neurons) and hyper-parameters (e.g. drop-out ratio). We find that the prediction accuracy of neural networks vastly outperforms the Markov chain approach, and that LSTMs perform slightly better than MLPs. We offer some qualitative interpretation of our findings and identify some promising future directions.", "title": "" }, { "docid": "a45b4d0237fdcfedf973ec639b1a1a36", "text": "We investigated the brain systems engaged during propositional speech (PrSp) and two forms of non- propositional speech (NPrSp): counting and reciting overlearned nursery rhymes. Bilateral cerebral and cerebellar regions were involved in the motor act of articulation, irrespective of the type of speech. Three additional, left-lateralized regions, adjacent to the Sylvian sulcus, were activated in common: the most posterior part of the supratemporal plane, the lateral part of the pars opercularis in the posterior inferior frontal gyrus and the anterior insula. Therefore, both NPrSp and PrSp were dependent on the same discrete subregions of the anatomically ill-defined areas of Wernicke and Broca. PrSp was also dependent on a predominantly left-lateralized neural system distributed between multi-modal and amodal regions in posterior inferior parietal, anterolateral and medial temporal and medial prefrontal cortex. 
The lateral prefrontal and paracingulate cortical activity observed in previous studies of cued word retrieval was not seen with either NPrSp or PrSp, demonstrating that normal brain- language representations cannot be inferred from explicit metalinguistic tasks. The evidence from this study indicates that normal communicative speech is dependent on a number of left hemisphere regions remote from the classic language areas of Wernicke and Broca. Destruction or disconnection of discrete left extrasylvian and perisylvian cortical regions, rather than the total extent of damage to perisylvian cortex, will account for the qualitative and quantitative differences in the impaired speech production observed in aphasic stroke patients.", "title": "" }, { "docid": "50ea6bc9342f9fd1bdf5d46d80dcc775", "text": "Title of Document: BRIDGING THE ATTACHMENT TRANSMISSION GAP WITH MATERNAL MIND-MINDEDNESS AND INFANT TEMPERAMENT Laura Jernigan Sherman, Master of Science, 2009 Directed By: Professor Jude Cassidy, Psychology The goal of this study was to test (a) whether maternal mind-mindedness (MM) mediates the link between maternal attachment (from the Adult Attachment Interview) and infant attachment (in the Strange Situation), and (b) whether infant temperament moderates this model of attachment transmission. Eighty-four racially diverse, economically stressed mothers and their infants were assessed three times: newborn, 5, and 12 months. Despite robust meta-analytic findings supporting attachment concordance for mothers and infants in community samples, this sample was characterized by low attachment concordance. Maternal attachment was unrelated to maternal MM; and, maternal MM was related to infant attachment differences for ambivalent infants only. Infant irritability did not moderate the model. Possible reasons for the discordant attachment patterns and the remaining findings are discussed in relation to theory and previous research. BRIDGING THE ATTACHMENT TRANSMISSION GAP WITH MATERNAL MIND-MINDEDNESS AND INFANT TEMPERAMENT", "title": "" }, { "docid": "5e7d5a86a007efd5d31e386c862fef5c", "text": "This systematic review examined the published scientific research on the psychosocial impact of cleft lip and palate (CLP) among children and adults. The primary objective of the review was to determine whether having CLP places an individual at greater risk of psychosocial problems. Studies that examined the psychosocial functioning of children and adults with repaired non-syndromal CLP were suitable for inclusion. The following sources were searched: Medline (January 1966-December 2003), CINAHL (January 1982-December 2003), Web of Science (January 1981-December 2003), PsycINFO (January 1887-December 2003), the reference section of relevant articles, and hand searches of relevant journals. There were 652 abstracts initially identified through database and other searches. On closer examination of these, only 117 appeared to meet the inclusion criteria. The full text of these papers was examined, with only 64 articles finally identified as suitable for inclusion in the review. Thirty of the 64 studies included a control group. The studies were longitudinal, cross-sectional, or retrospective in nature.Overall, the majority of children and adults with CLP do not appear to experience major psychosocial problems, although some specific problems may arise. For example, difficulties have been reported in relation to behavioural problems, satisfaction with facial appearance, depression, and anxiety. 
A few differences between cleft types have been found in relation to self-concept, satisfaction with facial appearance, depression, attachment, learning problems, and interpersonal relationships. With a few exceptions, the age of the individual with CLP does not appear to influence the occurrence or severity of psychosocial problems. However, the studies lack the uniformity and consistency required to adequately summarize the psychosocial problems resulting from CLP.", "title": "" }, { "docid": "33296736553ceaab2e113b62c05a803c", "text": "In cases of child abuse, usually, the parents are initial suspects. A common explanation of the parents is that the injuries were caused by a sibling. Child-on-child violence is reported to be very rare in children less than 5 years of age, and thorough investigation by the police, child protective services, and medicolegal examinations are needed to proof or disproof the parents' statement. We report two cases of physical abuse of infants by small children.", "title": "" }, { "docid": "9ad040dc3a1bcd498436772768903525", "text": "Memory B and plasma cells (PCs) are generated in the germinal center (GC). Because follicular helper T cells (TFH cells) have high expression of the immunoinhibitory receptor PD-1, we investigated the role of PD-1 signaling in the humoral response. We found that the PD-1 ligands PD-L1 and PD-L2 were upregulated on GC B cells. Mice deficient in PD-L2 (Pdcd1lg2−/−), PD-L1 and PD-L2 (Cd274−/−Pdcd1lg2−/−) or PD-1 (Pdcd1−/−) had fewer long-lived PCs. The mechanism involved more GC cell death and less TFH cell cytokine production in the absence of PD-1; the effect was selective, as remaining PCs had greater affinity for antigen. PD-1 expression on T cells and PD-L2 expression on B cells controlled TFH cell and PC numbers. Thus, PD-1 regulates selection and survival in the GC, affecting the quantity and quality of long-lived PCs.", "title": "" }, { "docid": "93278184377465ec1b870cd54dc49a93", "text": "We advocate the usage of 3D Zernike invariants as descriptors for 3D shape retrieval. The basis polynomials of this representation facilitate computation of invariants under rotation, translation and scaling. Some theoretical results have already been summarized in the past from the aspect of pattern recognition and shape analysis. We provide practical analysis of these invariants along with algorithms and computational details. Furthermore, we give a detailed discussion on influence of the algorithm parameters like the conversion into a volumetric function, number of utilized coefficients, etc. As is revealed by our study, the 3D Zernike descriptors are natural extensions of recently introduced spherical harmonics based descriptors. We conduct a comparison of 3D Zernike descriptors against these regarding computational aspects and shape retrieval performance using several quality measures and based on experiments on the Princeton Shape Benchmark.", "title": "" }, { "docid": "0a9a94bd83dfbbba2815f8575f1cb8a3", "text": "To create with an autonomous mobile robot a 3D volumetric map of a scene it is necessary to gage several 3D scans and to merge them into one consistent 3D model. This paper provides a new solution to the simultaneous localization and mapping (SLAM) problem with six degrees of freedom. Robot motion on natural surfaces has to cope with yaw, pitch and roll angles, turning pose estimation into a problem in six mathematical dimensions. 
A fast variant of the Iterative Closest Points algorithm registers the 3D scans in a common coordinate system and relocalizes the robot. Finally, consistent 3D maps are generated using a global relaxation. The algorithms have been tested with 3D scans taken in the Mathies mine, Pittsburgh, PA. Abandoned mines pose significant problems to society, yet a large fraction of them lack accurate 3D maps.", "title": "" }, { "docid": "522e384f4533ca656210561be9afbdab", "text": "Every software program that interacts with a user requires a user interface. Model-View-Controller (MVC) is a common design pattern to integrate a user interface with the application domain logic. MVC separates the representation of the application domain (Model) from the display of the application's state (View) and user interaction control (Controller). However, studying the literature reveals that a variety of other related patterns exists, which we denote with Model-View- (MV) design patterns. This paper discusses existing MV patterns classified in three main families: Model-View-Controller (MVC), Model-View-View Model (MVVM), and Model-View-Presenter (MVP). We take a practitioners' point of view and emphasize the essentials of each family as well as the differences. The study shows that the selection of patterns should take into account the use cases and quality requirements at hand, and chosen technology. We illustrate the selection of a pattern with an example of our practice. The study results aim to bring more clarity in the variety of MV design patterns and help practitioners to make better grounded decisions when selecting patterns.", "title": "" }, { "docid": "d509f695435ba51813164ee98512bf06", "text": "In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising of a specification, an implementation and an application layer. It provides a representational framework for the description of mining structured data, and in addition provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following the practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend. OntoDM-core is available at http://www.ontodm.com .", "title": "" }, { "docid": "58d8e3bd39fa470d1dfa321aeba53106", "text": "There are over 1.2 million Australians registered as having vision impairment. In most cases, vision impairment severely affects the mobility and orientation of the person, resulting in loss of independence and feelings of isolation. GPS technology and its applications have now become omnipresent and are used daily to improve and facilitate the lives of many. Although a number of products specifically designed for the Blind and Vision Impaired (BVI) and relying on GPS technology have been launched, this domain is still a niche and ongoing R&D is needed to bring all the benefits of GPS in terms of information and mobility to the BVI. 
The limitations of GPS indoors and in urban canyons have led to the development of new systems and signals that bridge the gap and provide positioning in those environments. Although indoor positioning technologies are still in their infancy, there is no doubt they will one day become as pervasive as GPS. It is therefore important to design those technologies with the BVI in mind, to make them accessible from scratch. This paper will present an indoor positioning system that has been designed in that way, examining the requirements of the BVI in terms of accuracy, reliability and interface design. The system runs locally on a mid-range smartphone and relies at its core on a Kalman filter that fuses the information of all the sensors available on the phone (Wi-Fi chipset, accelerometers and magnetic field sensor). Each part of the system is tested separately, as is the quality of the final solution.", "title": "" }, { "docid": "f68e447acd30cab6c2c68affb8c58d0c", "text": "This paper presents a Doppler radar sensor system with camera-aided random body movement cancellation (RBMC) techniques for noncontact vital sign detection. The camera measures the subject's random body motion that is provided for the radar system to perform RBMC and extract the uniform vital sign signals of respiration and heartbeat. Three RBMC strategies are proposed: 1) phase compensation at the radar RF front-end, 2) phase compensation for baseband complex signals, and 3) movement cancellation for demodulated signals. Both theoretical analysis and radar simulation have been carried out to validate the proposed RBMC techniques. An experiment was carried out to measure a subject person who was breathing normally but randomly moving his body back and forth. The experimental result reveals that the proposed radar system is effective for RBMC.", "title": "" }, { "docid": "330de15c472bd403f2572f3bdcce2d52", "text": "Programmers repeatedly reuse code snippets. By retyping boilerplate code and rediscovering how to correctly sequence API calls, programmers waste time. In this paper, we develop techniques that automatically synthesize code snippets upon a programmer's request. Our approach is based on discovering snippets located in repositories; we mine repositories offline and suggest discovered snippets to programmers. Upon request, our synthesis procedure uses the programmer's current code to find the best fitting snippets, which are then presented to the programmer. The programmer can then either learn the proper API usage or integrate the synthesized snippets directly into her code. We call this approach interactive code snippet synthesis through repository mining. We show that this approach reduces the time spent developing code by 32% in our experiments.", "title": "" }, { "docid": "066d3a381ffdb2492230bee14be56710", "text": "The Third Generation Partnership Project released its first 5G security specifications in March 2018. This paper reviews the proposed security architecture and its main requirements and procedures and evaluates them in the context of known and new protocol exploits. Although security has been improved from previous generations, our analysis identifies potentially unrealistic 5G system assumptions and protocol edge cases that can render 5G communication systems vulnerable to adversarial attacks. For example, null encryption and null authentication are still supported and can be used in valid system configurations. 
With no clear proposal to tackle pre-authentication message-based exploits, mobile devices continue to implicitly trust any serving network, which may or may not enforce a number of optional security features, or which may not be legitimate. Moreover, several critical security and key management functions are considered beyond the scope of the specifications. The comparison with known 4G long-term evolution protocol exploits reveals that the 5G security specifications, as of Release 15, Version 1.0.0, do not fully address the user privacy and network availability challenges.", "title": "" }, { "docid": "a80a539bf4e233e9dbde52426bf890d3", "text": "Innovative technology approaches have been increasingly investigated for the last two decades aiming at long-term monitoring of human beings. However, current solutions suffer from critical limitations. In this paper, a complete system for contactless health-monitoring in a home environment is presented. For the first time, radar, wireless communications, and data processing techniques are combined, enabling contactless fall detection and tagless localization. Practical limitations are considered and properly dealt with. Experimental tests, conducted with human volunteers in a realistic room setting, demonstrate an adequate detection of the target's absolute distance and a success rate of 94.3% in distinguishing fall events from normal movements. The volunteers were free to move about the whole room with no constraints in their movements.", "title": "" }, { "docid": "90bf404069bd3dfff1e6b108dafffe4c", "text": "To illustrate the differing thoughts and emotions involved in guiding habitual and nonhabitual behavior, 2 diary studies were conducted in which participants provided hourly reports of their ongoing experiences. When participants were engaged in habitual behavior, defined as behavior that had been performed almost daily in stable contexts, they were likely to think about issues unrelated to their behavior, presumably because they did not have to consciously guide their actions. When engaged in nonhabitual behavior, or actions performed less often or in shifting contexts, participants' thoughts tended to correspond to their behavior, suggesting that thought was necessary to guide action. Furthermore, the self-regulatory benefits of habits were apparent in the lesser feelings of stress associated with habitual than nonhabitual behavior.", "title": "" }, { "docid": "8410b8b76ab690ed4389efae15608d13", "text": "The most natural way to speed up the training of large networks is to use data parallelism on multiple GPUs. To scale Stochastic Gradient (SG) based methods to more processors, one needs to increase the batch size to make full use of the computational power of each GPU. However, keeping the accuracy of the network as the batch size increases is not trivial. Currently, the state-of-the-art method is to increase the Learning Rate (LR) proportionally to the batch size, and use a special learning rate with a \"warm-up\" policy to overcome the initial optimization difficulty. By controlling the LR during the training process, one can efficiently use large batches in ImageNet training. For example, Batch-1024 for AlexNet and Batch-8192 for ResNet-50 are successful applications. However, for ImageNet-1k training, state-of-the-art AlexNet only scales the batch size to 1024 and ResNet50 only scales it to 8192. The reason is that we cannot scale the learning rate to a large value. 
To enable large-batch training for general networks or datasets, we propose Layer-wise Adaptive Rate Scaling (LARS). LARS uses different LRs for different layers based on the norm of the weights (||w||) and the norm of the gradients (||∇w||). By using the LARS algorithm, we can scale the batch size to 32768 for ResNet50 and 8192 for AlexNet. A large batch can make full use of the system’s computational power. For example, batch-4096 can achieve a 3× speedup over batch-512 for ImageNet training with the AlexNet model on a DGX-1 station (8 P100 GPUs).", "title": "" }, { "docid": "a5ff7c80c36f354889e3f48e94052195", "text": "A meta-analysis examined emotion recognition within and across cultures. Emotions were universally recognized at better-than-chance levels. Accuracy was higher when emotions were both expressed and recognized by members of the same national, ethnic, or regional group, suggesting an in-group advantage. This advantage was smaller for cultural groups with greater exposure to one another, measured in terms of living in the same nation, physical proximity, and telephone communication. Majority group members were poorer at judging minority group members than the reverse. Cross-cultural accuracy was lower in studies that used a balanced research design, and higher in studies that used imitation rather than posed or spontaneous emotional expressions. Attributes of study design appeared not to moderate the size of the in-group advantage.", "title": "" } ]
scidocsrr
95713ad4aa91dc8f91f691b76c1eb1ca
Practical Dynamic Searchable Encryption with Small Leakage
[ { "docid": "c0a05cad5021b1e779682b50a53f25fd", "text": "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area. ∗Supported by NSF, MURI, and the Packard foundation. †Supported by NSF CNS-0716199, CNS-0915361, and CNS-0952692, Air Force Office of Scientific Research (AFO SR) under the MURI award for “Collaborative policies and assured information sharing” (Project PRESIDIO), Department of Homeland Security Grant 2006-CS-001-000001-02 (subaward 641), and the Alfred P. Sloan Foundation.", "title": "" } ]
[ { "docid": "69d42340c09303b69eafb19de7170159", "text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.", "title": "" }, { "docid": "92d271da0c5dff6e130e55168c64d2b0", "text": "New therapeutic targets for noncognitive reductions in energy intake, absorption, or storage are crucial given the worldwide epidemic of obesity. The gut microbial community (microbiota) is essential for processing dietary polysaccharides. We found that conventionalization of adult germ-free (GF) C57BL/6 mice with a normal microbiota harvested from the distal intestine (cecum) of conventionally raised animals produces a 60% increase in body fat content and insulin resistance within 14 days despite reduced food intake. Studies of GF and conventionalized mice revealed that the microbiota promotes absorption of monosaccharides from the gut lumen, with resulting induction of de novo hepatic lipogenesis. Fasting-induced adipocyte factor (Fiaf), a member of the angiopoietin-like family of proteins, is selectively suppressed in the intestinal epithelium of normal mice by conventionalization. Analysis of GF and conventionalized, normal and Fiaf knockout mice established that Fiaf is a circulating lipoprotein lipase inhibitor and that its suppression is essential for the microbiota-induced deposition of triglycerides in adipocytes. Studies of Rag1-/- animals indicate that these host responses do not require mature lymphocytes. Our findings suggest that the gut microbiota is an important environmental factor that affects energy harvest from the diet and energy storage in the host. Data deposition: The sequences reported in this paper have been deposited in the GenBank database (accession nos. AY 667702--AY 668946).", "title": "" }, { "docid": "32a2bfb7a26631f435f9cb5d825d8da2", "text": "An important aspect for the task of grammatical error correction (GEC) that has not yet been adequately explored is adaptation based on the native language (L1) of writers, despite the marked influences of L1 on second language (L2) writing. In this paper, we adapt a neural network joint model (NNJM) using L1-specific learner text and integrate it into a statistical machine translation (SMT) based GEC system. Specifically, we train an NNJM on general learner text (not L1-specific) and subsequently train on L1-specific data using a Kullback-Leibler divergence regularized objective function in order to preserve generalization of the model. 
We incorporate this adapted NNJM as a feature in an SMT-based English GEC system and show that adaptation achieves significant F0.5 score gains on English texts written by L1 Chinese, Russian, and Spanish writers.", "title": "" }, { "docid": "a1d9742feb9f2a5dcf2322b00daf4151", "text": "We tackle the problem of predicting the future popularity level of micro-reviews, focusing on Foursquare tips, whose high degree of informality and briefness offers extra difficulties to the design of effective popularity prediction methods. Such predictions can greatly benefit the future design of content filtering and recommendation methods. Towards our goal, we first propose a rich set of features related to the user who posted the tip, the venue where it was posted, and the tip’s content to capture factors that may impact the popularity of a tip. We evaluate different regression and classification based models using this rich set of proposed features as predictors in various scenarios. As far as we know, this is the first work to investigate the predictability of micro-review popularity (or helpfulness) exploiting spatial, temporal, topical, and social aspects that are rarely exploited conjointly in this domain.", "title": "" }, { "docid": "8c07232e73849116c070ffa2367e3c6f", "text": "Childhood apraxia of speech (CAS) is a motor speech disorder characterized by distorted phonemes, segmentation (increased segment and intersegment durations), and impaired production of lexical stress. This study investigated the efficacy of Treatment for Establishing Motor Program Organization (TEMPO) in nine participants (ages 5 to 8) using a delayed treatment group design. Children received four weeks of intervention for four days each week. Experimental probes were administered at baseline and posttreatment—both immediately and one month after treatment—for treated and untreated stimuli. Significant improvements in specific acoustic measures of segmentation and lexical stress were demonstrated following treatment for both the immediate and delayed treatment groups. Treatment effects for all variables were maintained at one-month post-treatment. These results support the demonstrated efficacy of TEMPO in improving the speech of children with CAS.", "title": "" }, { "docid": "45e1a424ad0807ce49cd4e755bdd9351", "text": "Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. 
Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend towards deeper packet inspection to construct more relevant features through targeted content parsing. These context sensitive features are required to detect current attacks.", "title": "" }, { "docid": "08a1da753730a8c39ef6e98277939f9c", "text": "One of the most important issues in the operation of a photovoltaic (PV) system is extracting maximum power from the PV array, especially in partial shading condition (PSC). Under PSC, P-V characteristic of PV arrays will have multiple peak points, only one of which is global maximum. Conventional maximum power point tracking (MPPT) methods are not able to extract maximum power in this condition. In this paper, a novel two-stage MPPT method is presented to overcome this drawback. In the first stage, a method is proposed to determine the occurrence of PSC, and in the second stage, using a new algorithm that is based on ramp change of the duty cycle and continuous sampling from the P-V characteristic of the array, global maximum power point (MPP) of array is reached. Perturb and observe algorithm is then re-activated to trace small changes of the new MPP. Open-loop operation of the proposed method makes its implementation cheap and simple. The method is robust in the face of changing environmental conditions and array characteristics, and has minimum negative impact on the connected power system. Simulations in Matlab/Simulink and experimental results validate the performance of the proposed methods.", "title": "" }, { "docid": "2d6d33cbbf69cc864c2a65c30f60e5ec", "text": "This article provides a framework for actuaries to think about cyber risk. We propose a differentiated view on cyber versus conventional risk by separating the nature of risk arrival from the target exposed to risk. Our review synthesizes the literature on cyber risk analysis from various disciplines, including computer and network engineering, economics, and actuarial sciences. As a result, we identify possible ways forward to improve rigorous modeling of cyber risk, including its driving factors. This is a prerequisite for establishing a deep and stable market for cyber risk insurance.", "title": "" }, { "docid": "64828addebd6e9b1773e5d8e2e1668af", "text": "Named entity typing is the task of detecting the types of a named entity in context. For instance, given “Eric is giving a presentation”, our goal is to infer that ‘Eric’ is a speaker or a presenter and a person. Existing approaches to named entity typing cannot work with a growing type set and fails to recognize entity mentions of unseen types. In this paper, we present a label embedding method that incorporates prototypical and hierarchical information to learn pre-trained label embeddings. In addition, we adapt a zero-shot framework that can predict both seen and previously unseen entity types. We perform evaluation on three benchmark datasets with two settings: 1) few-shots recognition where all types are covered by the training set; and 2) zero-shot recognition where fine-grained types are assumed absent from training set. 
Results show that prior knowledge encoded using our label embedding methods can significantly boost the performance of classification for both cases.", "title": "" }, { "docid": "91e574a20ad41b1725da02d125977fd3", "text": "We propose a framework based on Generative Adversarial Networks to disentangle the identity and attributes of faces, such that we can conveniently recombine different identities and attributes for identity preserving face synthesis in open domains. Previous identity preserving face synthesis processes are largely confined to synthesizing faces with known identities that are already in the training dataset. To synthesize a face with identity outside the training dataset, our framework requires one input image of that subject to produce an identity vector, and any other input face image to extract an attribute vector capturing, e.g., pose, emotion, illumination, and even the background. We then recombine the identity vector and the attribute vector to synthesize a new face of the subject with the extracted attribute. Our proposed framework does not need to annotate the attributes of faces in any way. It is trained with an asymmetric loss function to better preserve the identity and stabilize the training process. It can also effectively leverage large amounts of unlabeled training face images to further improve the fidelity of the synthesized faces for subjects that are not presented in the labeled training face dataset. Our experiments demonstrate the efficacy of the proposed framework. We also present its usage in a much broader set of applications including face frontalization, face attribute morphing, and face adversarial example detection.", "title": "" }, { "docid": "dc53e2bf9576fd3fb7670b0860eae754", "text": "In the field of ADAS and self-driving cars, lane and drivable road detection play an essential role in reliably accomplishing other tasks, such as object detection. For monocular vision based semantic segmentation of lane and road, we propose a dilated feature pyramid network (FPN) with feature aggregation, called DFFA, where feature aggregation is employed to combine multi-level features enhanced with dilated convolution operations and FPN under the framework of ResNet. Experimental results validate the effectiveness and efficiency of the proposed deep learning model for semantic segmentation of lane and drivable road. Our DFFA achieves the best performance on both the Lane Estimation Evaluation and Behavior Evaluation tasks in KITTI-ROAD and takes second place on the UU ROAD task.", "title": "" }, { "docid": "5aef75aead029333a2e47a5d1ba52f2e", "text": "Although we appreciate Kinney and Atwal’s interest in equitability and maximal information coefficient (MIC), we believe they misrepresent our work. We highlight a few of our main objections below. Regarding our original paper (1), Kinney and Atwal (2) state “MIC is said to satisfy not just the heuristic notion of equitability, but also the mathematical criterion of R² equitability,” the latter being their formalization of the heuristic notion that we introduced. This statement is simply false. We were explicit in our paper that our claims regarding MIC’s performance were based on large-scale simulations: “We tested MIC’s equitability through simulations. . 
..[These] show that, for a large collection of test functions with varied sample sizes, noise levels, and noise models, MIC roughly equals the coefficient of determination R² relative to each respective noiseless function.” Although we mathematically proved several things about MIC, none of our claims imply that it satisfies Kinney and Atwal's R² equitability, which would require that MIC exactly equal R² in the infinite data limit. Thus, their proof that no dependence measure can satisfy R² equitability, although interesting, does not uncover any error in our work, and their suggestion that it does is a gross misrepresentation. Kinney and Atwal seem ready to toss out equitability as a useful criterion based on their theoretical result. We argue, however, that regardless of whether "perfect" equitability is possible, approximate notions of equitability remain the right goal for many data exploration settings. Just as the theory of NP completeness does not suggest we stop thinking about NP complete problems, but instead that we look for approximations and solutions in restricted cases, an impossibility result about perfect equitability provides focus for further research, but does not mean that useful solutions are unattainable. Similarly, as others have noted (3), Kinney and Atwal's proof requires a highly permissive noise model, and so the attainability of R² equitability under more limited noise models such as those in our work remains an open question. Finally, the authors argue that mutual information is more equitable than MIC. However, they provide as justification only a single noise model, only at limiting sample sizes (n ≥ 5,000). As we've shown in followup work (4), which they themselves cite but fail to address, MIC is more equitable than mutual information estimation under many other realistic noise models even at a sample size of 5,000. Kinney and Atwal have stated, ". . .it matters how one defines noise" (5), and a useful statistic must indeed be robust to a wide range of noise models. Equally importantly, we've established in both our original and follow-up work that at sample size regimes less than 5,000, MIC is more equitable than mutual information estimates across all noise models tested. MIC's superior equitability in these settings is not an "artifact" we neglected—as Kinney and Atwal suggest—but rather a weakness of mutual information estimation and an important consideration for practitioners. We expect that the understanding of equitability and MIC will improve over time and that better methods may arise. However, accurate representations of the work thus far will allow researchers in the area to most productively and collectively move forward.", "title": "" }, { "docid": "333fd7802029f38bda35cd2077e7de59", "text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. 
Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.", "title": "" }, { "docid": "134cde769a3faeeac80746b85313bd0b", "text": "Adrenocortical carcinoma (ACC) in pediatric and adolescent patients is rare, and it is associated with various clinical symptoms. We introduce the case of an 8-year-old boy with ACC who presented with peripheral precocious puberty at his first visit. He displayed penis enlargement with pubic hair and facial acne. His serum adrenal androgen levels were elevated, and abdominal computed tomography revealed a right suprarenal mass. After complete surgical resection, the histological diagnosis was ACC. Two months after surgical removal of the mass, he subsequently developed central precocious puberty. He was treated with a gonadotropin-releasing hormone agonist to delay further pubertal progression. In patients with functioning ACC and surgical removal, clinical follow-up and hormonal marker examination for the secondary effects of excessive hormone secretion may be a useful option at least every 2 or 3 months after surgery.", "title": "" }, { "docid": "69c8cd29d23d64ba36df376cc7a0c174", "text": "In recent years, due to its strong nonlinear mapping and research capacities, the convolutional neural network (CNN) has been widely used in the field of hyperspectral image (HSI) processing. Recently, pixel pair features (PPFs) and spatial PPFs (SPPFs) for HSI classification have served as the new tools for feature extraction. In this paper, on top of PPF, improved subtraction pixel pair features (subtraction-PPFs) are applied for HSI target detection. Unlike original PPF and SPPF, the subtraction-PPF considers target classes to afford the CNN, a target detection function. Using subtraction-PPF, a sufficiently large number of samples are obtained to ensure the excellent performance of the multilayer CNN. For a testing pixel, the input of the trained CNN is the spectral difference between the central pixel and its adjacent pixels. When a test pixel belongs to the target, the output score will be close to the target label. To verify the effectiveness of the proposed method, aircrafts and vehicles are used as targets of interest, while another 27 objects are chosen as background classes (e.g., vegetation and runways). Our experimental results on four images indicate that the proposed detector outperforms classic hyperspectral target detection algorithms.", "title": "" }, { "docid": "55f677c0f55d5ba93507e3b4113c09f3", "text": "In modern power electronic systems, DC-DC converter is one of the main controlled power sources for driving DC systems. But the inherent nonlinear and time-varying characteristics often result in some difficulties mostly related to the control issue. This paper presents a robust nonlinear adaptive controller design with a recursive methodology based on the pulse width modulation (PWM) to drive a DC-DC buck converter. The proposed controller is designed based on the dynamical model of the buck converter where all parameters within the model are assumed as unknown. 
These unknown parameters are estimated through the adaptation laws and the stability of these laws are ensured by formulating suitable control Lyapunov functions (CLFs) at different stages. The proposed control scheme also provides robustness against external disturbances as these disturbances are considered within the model. One of the main features of the proposed scheme is that it overcomes the over-parameterization problems of unknown parameters which usually appear in some conventional adaptive methods. Finally, the effectiveness of the proposed control scheme is verified through the simulation results and compared to that of an existing adaptive backstepping controller. Simulation results clearly indicate the performance improvement in terms of a faster output voltage tracking response.", "title": "" }, { "docid": "796dc233bbf4e9e063485f26ab7b5b64", "text": "Anomaly detection refers to identifying the patterns in data that deviate from expected behavior. These non-conforming patterns are often termed as outliers, malwares, anomalies or exceptions in different application domains. This paper presents a novel, generic real-time distributed anomaly detection framework for multi-source stream data. As a case study, we have decided to detect anomaly for multi-source VMware-based cloud data center. The framework monitors VMware performance stream data (e.g., CPU load, memory usage, etc.) continuously. It collects these data simultaneously from all the VMwares connected to the network. It notifies the resource manager to reschedule its resources dynamically when it identifies any abnormal behavior of its collected data. We have used Apache Spark, a distributed framework for processing performance stream data and making prediction without any delay. Spark is chosen over a traditional distributed framework (e.g., Hadoop and MapReduce, Mahout, etc.) that is not ideal for stream data processing. We have implemented a flat incremental clustering algorithm to model the benign characteristics in our distributed Spark based framework. We have compared the average processing latency of a tuple during clustering and prediction in Spark with Storm, another distributed framework for stream data processing. We experimentally find that Spark processes a tuple much quicker than Storm on average.", "title": "" }, { "docid": "db3c5c93daf97619ad927532266b3347", "text": "Car9, a dodecapeptide identified by cell surface display for its ability to bind to the edge of carbonaceous materials, also binds to silica with high affinity. The interaction can be disrupted with l-lysine or l-arginine, enabling a broad range of technological applications. Previously, we reported that C-terminal Car9 extensions support efficient protein purification on underivatized silica. Here, we show that the Car9 tag is functional and TEV protease-excisable when fused to the N-termini of target proteins, and that it supports affinity purification under denaturing conditions, albeit with reduced yields. We further demonstrate that capture of Car9-tagged proteins is enhanced on small particle size silica gels with large pores, that the concomitant problem of nonspecific protein adsorption can be solved by lysing cells in the presence of 0.3% Tween 20, and that efficient elution is achieved at reduced l-lysine concentrations under alkaline conditions. An optimized small-scale purification kit incorporating the above features allows Car9-tagged proteins to be inexpensively recovered in minutes with better than 90% purity. 
The Car9 affinity purification technology should prove valuable for laboratory-scale applications requiring rapid access to milligram-quantities of proteins, and for preparative scale purification schemes where cost and productivity are important factors.", "title": "" }, { "docid": "51a2d48f43efdd8f190fd2b6c9a68b3c", "text": "Textual passwords are often the only mechanism used to authenticate users of a networked system. Unfortunately, many passwords are easily guessed or cracked. In an attempt to strengthen passwords, some systems instruct users to create mnemonic phrase-based passwords. A mnemonic password is one where a user chooses a memorable phrase and uses a character (often the first letter) to represent each word in the phrase.In this paper, we hypothesize that users will select mnemonic phrases that are commonly available on the Internet, and that it is possible to build a dictionary to crack mnemonic phrase-based passwords. We conduct a survey to gather user-generated passwords. We show the majority of survey respondents based their mnemonic passwords on phrases that can be found on the Internet, and we generate a mnemonic password dictionary as a proof of concept. Our 400,000-entry dictionary cracked 4% of mnemonic passwords; in comparison, a standard dictionary with 1.2 million entries cracked 11% of control passwords. The user-generated mnemonic passwords were also slightly more resistant to brute force attacks than control passwords. These results suggest that mnemonic passwords may be appropriate for some uses today. However, mnemonic passwords could become more vulnerable in the future and should not be treated as a panacea.", "title": "" }, { "docid": "263488a376e419cbbd6cd7c4ecc70a4f", "text": "This paper discusses the ethical issues related to hemicorporectomy surgery, a radical procedure that removes the lower half of the body in order to prolong life. The literature on hemicorporectomy (HC), also called translumbar amputation, has been nearly silent on the ethical considerations relevant to this rare procedure. We explore five aspects of the complex landscape of hemicorporectomy to illustrate the broader ethical questions related to this extraordinary procedure: benefits, risks, informed consent, resource allocation and justice, and loss and the lived body.", "title": "" } ]
scidocsrr
cba477ae81d28d334ed6184c60b345d3
BoostClean: Automated Error Detection and Repair for Machine Learning
[ { "docid": "4fa6343567b96be083e342bf11ee093f", "text": "Data cleaning is frequently an iterative process tailored to the requirements of a specific analysis task. The design and implementation of iterative data cleaning tools presents novel challenges, both technical and organizational, to the community. In this paper, we present results from a user survey (N = 29) of data analysts and infrastructure engineers from industry and academia. We highlight three important themes: (1) the iterative nature of data cleaning, (2) the lack of rigor in evaluating the correctness of data cleaning, and (3) the disconnect between the analysts who query the data and the infrastructure engineers who design the cleaning pipelines. We conclude by presenting a number of recommendations for future work in which we envision an interactive data cleaning system that accounts for the observed challenges.", "title": "" }, { "docid": "4b90fefa981e091ac6a5d2fd83e98b66", "text": "This paper explores an analysis-aware data cleaning architecture for a large class of SPJ SQL queries. In particular, we propose QuERy, a novel framework for integrating entity resolution (ER) with query processing. The aim of QuERy is to correctly and efficiently answer complex queries issued on top of dirty data. The comprehensive empirical evaluation of the proposed solution demonstrates its significant advantage in terms of efficiency over the traditional techniques for the given problem settings.", "title": "" } ]
[ { "docid": "526a687b663b488b5c5cddc1107a0865", "text": "Ricin toxin-binding subunit B (RTB) is a galactosebinding lectin protein. In the present study, we investigated the effects of RTB on inducible nitric oxide (NO) synthase (iNOS), interleukin (IL)-6 and tumor necrosis factor (TNF)-α, as well as the signal transduction mechanisms involved in recombinant RTB-induced macrophage activation. RAW264.7 macrophages were treated with RTB. The results revealed that the mRNA and protein expression of iNOS was increased in the recombinant RTB-treated macrophages. TNF-α production was observed to peak at 20 h, whereas the production of IL-6 peaked at 24 h. In another set of cultures, the cells were co-incubated with RTB and the tyrosine kinase inhibitor, genistein, the phosphatidylinositol 3-kinase (PI3K) inhibitor, LY294002, the p42/44 inhibitor, PD98059, the p38 inhibitor, SB203580, the JNK inhibitor, SP600125, the protein kinase C (PKC) inhibitor, staurosporine, the JAK2 inhibitor, tyrphostin (AG490), or the NOS inhibitor, L-NMMA. The recombinant RTB-induced production of NO, TNF-α and IL-6 was inhibited in the macrophages treated with the pharmacological inhibitors genistein, LY294002, staurosporine, AG490, SB203580 and BAY 11-7082, indicating the possible involvement of protein tyrosine kinases, PI3K, PKC, JAK2, p38 mitogen-activated protein kinase (MAPK) and nuclear factor (NF)-κB in the above processes. A phosphoprotein analysis identified tyrosine phosphorylation targets that were uniquely induced by recombinant RTB and inhibited following treatment with genistein; some of these proteins are associated with the downstream cascades of activated JAK-STAT and NF-κB receptors. Our data may help to identify the most important target molecules for the development of novel drug therapies.", "title": "" }, { "docid": "9af4c955b7c08ca5ffbfabc9681f9525", "text": "The emergence of deep neural networks (DNNs) as a state-of-the-art machine learning technique has enabled a variety of artificial intelligence applications for image recognition, speech recognition and translation, drug discovery, and machine vision. These applications are backed by large DNN models running in serving mode on a cloud computing infrastructure to process client inputs such as images, speech segments, and text segments. Given the compute-intensive nature of large DNN models, a key challenge for DNN serving systems is to minimize the request response latencies. This paper characterizes the behavior of different parallelism techniques for supporting scalable and responsive serving systems for large DNNs. We identify and model two important properties of DNN workloads: 1) homogeneous request service demand and 2) interference among requests running concurrently due to cache/memory contention. These properties motivate the design of serving deep learning systems fast (SERF), a dynamic scheduling framework that is powered by an interference-aware queueing-based analytical model. To minimize response latency for DNN serving, SERF quickly identifies and switches to the optimal parallel configuration of the serving system by using both empirical and analytical methods. Our evaluation of SERF using several well-known benchmarks demonstrates its good latency prediction accuracy, its ability to correctly identify optimal parallel configurations for each benchmark, its ability to adapt to changing load conditions, and its efficiency advantage (by at least three orders of magnitude faster) over exhaustive profiling. 
We also demonstrate that SERF supports other scheduling objectives and can be extended to any general machine learning serving system with the similar parallelism properties as above.", "title": "" }, { "docid": "380380bd46d854febd0bf12e50ec540b", "text": "STUDY DESIGN\nExperimental laboratory study.\n\n\nOBJECTIVES\nTo quantify and compare electromyographic signal amplitude of the gluteus maximus and gluteus medius muscles during exercises of varying difficulty to determine which exercise most effectively recruits these muscles.\n\n\nBACKGROUND\nGluteal muscle weakness has been proposed to be associated with lower extremity injury. Exercises to strengthen the gluteal muscles are frequently used in rehabilitation and injury prevention programs without scientific evidence regarding their ability to activate the targeted muscles.\n\n\nMETHODS\nSurface electromyography was used to quantify the activity level of the gluteal muscles in 21 healthy, physically active subjects while performing 12 exercises. Repeated-measures analyses of variance were used to compare normalized mean signal amplitude levels, expressed as a percent of a maximum voluntary isometric contraction (MVIC), across exercises.\n\n\nRESULTS\nSignificant differences in signal amplitude among exercises were noted for the gluteus medius (F5,90 = 7.9, P<.0001) and gluteus maximus (F5,95 = 8.1, P<.0001). Gluteus medius activity was significantly greater during side-lying hip abduction (mean +/- SD, 81% +/- 42% MVIC) compared to the 2 types of hip clam (40% +/- 38% MVIC, 38% +/- 29% MVIC), lunges (48% +/- 21% MVIC), and hop (48% +/- 25% MVIC) exercises. The single-limb squat and single-limb deadlift activated the gluteus medius (single-limb squat, 64% +/- 25% MVIC; single-limb deadlift, 59% +/- 25% MVIC) and maximus (single-limb squat, 59% +/- 27% MVIC; single-limb deadlift, 59% +/- 28% MVIC) similarly. The gluteus maximus activation during the single-limb squat and single-limb deadlift was significantly greater than during the lateral band walk (27% +/- 16% MVIC), hip clam (34% +/- 27% MVIC), and hop (forward, 35% +/- 22% MVIC; transverse, 35% +/- 16% MVIC) exercises.\n\n\nCONCLUSION\nThe best exercise for the gluteus medius was side-lying hip abduction, while the single-limb squat and single-limb deadlift exercises led to the greatest activation of the gluteus maximus. These results provide information to the clinician about relative activation of the gluteal muscles during specific therapeutic exercises that can influence exercise progression and prescription. J Orthop Sports Phys Ther 2009;39(7):532-540, Epub 24 February 2009. doi:10.2519/jospt.2009.2796.", "title": "" }, { "docid": "9497731525a996844714d5bdbca6ae03", "text": "Recently, machine learning is widely used in applications and cloud services. And as the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. To give users better experience, high performance implementations of deep learning applications seem very important. As a common means to accelerate algorithms, FPGA has high performance, low power consumption, small size and other characteristics. So we use FPGA to design a deep learning accelerator, the accelerator focuses on the implementation of the prediction process, data access optimization and pipeline structure. 
Compared with Core 2 CPU 2.3GHz, our accelerator can achieve promising result.", "title": "" }, { "docid": "a09d03e2de70774f443d2da88a32b555", "text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs) [1]. Brain-computer interfaces are devices that process a user’s brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted non-disabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming.", "title": "" }, { "docid": "dab84197dec153309bb45368ab730b12", "text": "Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the conditional relations is often a tedious and error-prone task. This article provides an overview of methods used to probe interaction effects and describes a unified collection of freely available online resources that researchers can use to obtain significance tests for simple slopes, compute regions of significance, and obtain confidence bands for simple slopes across the range of the moderator in the MLR, HLM, and LCA contexts. Plotting capabilities are also provided.", "title": "" }, { "docid": "8ccbf0f95df6d4d3c8eba33befc0f6b7", "text": "Tactile graphics play an essential role in knowledge transfer for blind people. The tactile exploration of these graphics is often challenging because of the cognitive load caused by physiological constraints and their complexity. The coupling of physical tactile graphics with electronic devices offers to support the tactile exploration by auditory feedback. Often, these systems have strict constraints regarding their mobility or the process of coupling both components. Additionally, visually impaired people cannot appropriately benefit from their residual vision. This article presents a concept for 3D printed tactile graphics, which offers to use audio-tactile graphics with usual smartphones or tablet-computers. By using capacitive markers, the coupling of the tactile graphics with the mobile device is simplified. These tactile graphics integrating these markers can be printed in one turn by off-the-shelf 3D printers without any post-processing and allows us to use multiple elevation levels for graphical elements. Based on the developed generic concept on visually augmented audio-tactile graphics, we presented a case study for maps. A prototypical implementation was tested by a user study with visually impaired people. All the participants were able to interact with the 3D printed tactile maps using a standard tablet computer. To study the effect of visual augmentation of graphical elements, we conducted another comprehensive user study. We tested multiple types of graphics and obtained evidence that visual augmentation may offer clear advantages for the exploration of tactile graphics. 
Even participants with a minor residual vision could solve the tasks with visual augmentation more quickly and accurately.", "title": "" }, { "docid": "89bcf5b0af2f8bf6121e28d36ca78e95", "text": "3 Relating modules to external clinical traits; 3.a Quantifying module–trait associations; 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership; 3.c Intramodular analysis: identifying genes with high GS and MM; 3.d Summary output of network analysis results", "title": "" }, { "docid": "3ce203d713a0060cc3c1466d62c9bd36", "text": "This paper describes successful applications of discriminative lexicon models to the statistical machine translation (SMT) systems into morphologically complex languages. We extend the previous work on discriminatively trained lexicon models to include more contextual information in making lexical selection decisions by building a single global log-linear model of translation selection. In offline experiments, we show that the use of the expanded contextual information, including morphological and syntactic features, help better predict words in three target languages with complex morphology (Bulgarian, Czech and Korean). We also show that these improved lexical prediction models make a positive impact in the end-to-end SMT scenario from English to these languages.", "title": "" }, { "docid": "39b5095283fd753013c38459a93246fd", "text": "OBJECTIVE\nTo determine whether cannabis use in adolescence predisposes to higher rates of depression and anxiety in young adulthood.\n\n\nDESIGN\nSeven wave cohort study over six years.\n\n\nSETTING\n44 schools in the Australian state of Victoria.\n\n\nPARTICIPANTS\nA statewide secondary school sample of 1601 students aged 14-15 followed for seven years.\n\n\nMAIN OUTCOME MEASURE\nInterview measure of depression and anxiety (revised clinical interview schedule) at wave 7.\n\n\nRESULTS\nSome 60% of participants had used cannabis by the age of 20; 7% were daily users at that point. Daily use in young women was associated with an over fivefold increase in the odds of reporting a state of depression and anxiety after adjustment for intercurrent use of other substances (odds ratio 5.6, 95% confidence interval 2.6 to 12). Weekly or more frequent cannabis use in teenagers predicted an approximately twofold increase in risk for later depression and anxiety (1.9, 1.1 to 3.3) after adjustment for potential baseline confounders. In contrast, depression and anxiety in teenagers predicted neither later weekly nor daily cannabis use.\n\n\nCONCLUSIONS\nFrequent cannabis use in teenage girls predicts later depression and anxiety, with daily users carrying the highest risk. Given recent increasing levels of cannabis use, measures to reduce frequent and heavy recreational use seem warranted.", "title": "" }, { "docid": "c188731b9047bbbe70c35690a5a584ab", "text": "Resource Managers like YARN and Mesos have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low level. 
This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault tolerance, task scheduling and coordination) and reimplement common mechanisms (e.g., caching, bulk-data transfers). This article presents REEF, a development framework that provides a control plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource reuse for data caching and state management abstractions that greatly ease the development of elastic data processing pipelines on cloud platforms that support a Resource Manager service. We illustrate the power of REEF by showing applications built atop: a distributed shell application, a machine-learning framework, a distributed in-memory caching system, and a port of the CORFU system. REEF is currently an Apache top-level project that has attracted contributors from several institutions and it is being used to develop several commercial offerings such as the Azure Stream Analytics service.", "title": "" }, { "docid": "d2c36f67971c22595bc483ebb7345404", "text": "Resistive-switching random access memory (RRAM) devices utilizing a crossbar architecture represent a promising alternative for Flash replacement in high-density data storage applications. However, RRAM crossbar arrays require the adoption of diodelike select devices with high on-off current ratio and with sufficient endurance. To avoid the use of select devices, one should develop passive arrays where the nonlinear characteristic of the RRAM device itself provides self-selection during read and write. This paper discusses the complementary switching (CS) in hafnium oxide RRAM, where the logic bit can be encoded in two high-resistance levels, thus being immune from leakage currents and related sneak-through effects in the crossbar array. The CS physical mechanism is described through simulation results by an ion-migration model for bipolar switching. Results from pulsed-regime characterization are shown, demonstrating that CS can be operated at least in the 10-ns time scale. The minimization of the reset current is finally discussed.", "title": "" }, { "docid": "f571329b93779ae073184d9d63eb0c6c", "text": "Retailers are now the dominant partners in most supply systems and have used their positions to re-engineer operations and partnerships with suppliers and other logistic service providers. No longer are retailers the passive recipients of manufacturer allocations, but instead are the active channel controllers organizing supply in anticipation of, and reaction to consumer demand. This paper reflects on the ongoing transformation of retail supply chains and logistics. It considers this transformation through an examination of the fashion, grocery and selected other retail supply chains, drawing on practical illustrations. Current and future challenges are then discussed. Introduction Retailers were once the passive recipients of products allocated to stores by manufacturers in the hope of purchase by consumers and replenished only at the whim and timing of the manufacturer. Today, retailers are the controllers of product supply in anticipation of, and reaction to, researched, understood, and real-time customer demand. Retailers now control, organise, and manage the supply chain from production to consumption. 
This is the essence of the retail logistics and supply chain transformation that has taken place since the latter part of the twentieth century. Retailers have become the channel captains and set the pace in logistics. Having extended their channel control and focused on corporate efficiency and effectiveness, retailers have", "title": "" }, { "docid": "f7a1eaa86a81b104a9ae62dc87c495aa", "text": "In the Internet of Things, the extreme heterogeneity of sensors, actuators and user devices calls for new tools and design models able to translate the user's needs in machine-understandable scenarios. The scientific community has proposed different solutions for this issue, e.g., the MQTT (MQ Telemetry Transport) protocol introduced the topic concept as “the key that identifies the information channel to which payload data is published”. This study extends the topic approach by proposing the Web of Topics (WoX), a conceptual model for the IoT. A WoX Topic is identified by two coordinates: (i) a discrete semantic feature of interest (e.g. temperature, humidity), and (ii) a URI-based location. 
An IoT entity defines its role within a Topic by specifying its technological and collaborative dimensions. By this approach, it is easier to define an IoT entity as a set of couples Topic-Role. In order to prove the effectiveness of the WoX approach, we developed the WoX APIs on top of an EPCglobal implementation. Then, 10 developers were asked to build a WoX-based application supporting a physics lab scenario at school. They also filled out an ex-ante and an ex-post questionnaire. A set of qualitative and quantitative metrics allowed measuring the model's outcome.", "title": "" }, { "docid": "b19fb7f7471d3565e79dbaab3572bb4d", "text": "Self-enucleation or oedipism is a specific manifestation of psychiatric illness distinct from the milder forms of self-inflicted ocular injury. In this article, we discuss the previously unreported medical complication of subarachnoid hemorrhage accompanying self-enucleation. The diagnosis was suspected from the patient's history and was confirmed by computed tomographic scan of the head. This complication may be easily missed in the overtly psychotic patient. Specific steps in the medical management of self-enucleation are discussed, and medical complications of self-enucleation are reviewed.", "title": "" }, { "docid": "18da4e2cd0745e400002d24117834fd8", "text": "This paper examines the possible influence of podcasting on the traditional lecture in higher education. Firstly, it explores some of the benefits and limitations of the lecture as one of the dominant forms of teaching in higher education. The review then moves to explore the emergence of podcasting in education and the purpose of its use, before examining recent relevant literature about podcasting for supporting, enhancing, and indeed replacing the traditional lecture. The review identifies three broad types of use of podcasting: substitutional, supplementary and creative use. Podcasting appears to be most commonly used to provide recordings of past lectures to students for the purposes of review and revision (substitutional use). The second most common use was in providing additional material, often in the form of study guides and summary notes, to broaden and deepen students’ understanding (supplementary use). The third and least common use reported in the literature involved the creation of student generated podcasts (creative use). The review examines three key questions: What are the educational uses of podcasting in teaching and learning in higher education? Can podcasting facilitate more flexible and mobile learning? In what ways will podcasting influence the traditional lecture? These questions are discussed in the final section of the paper, with reference to future policies and practices.", "title": "" }, { "docid": "007634725171f426691246c419f067ad", "text": "A flexible multidelay block frequency domain (MDF) adaptive filter is presented. The distinct feature of the MDF adaptive filter is to allow one to choose the size of an FFT tailored to the efficient use of a hardware, rather than the requirement of a specific application. The MDF adaptive filter also requires less memory and so reduces the requirement and cost of a hardware. In performance, the MDF adaptive filter introduces smaller block delay and is faster,.ideal for a time-varying system such as modeling an acoustic path in a teleconference room. This is achieved by using smaller block size, updating the weight vectors more often, and reducing the total execution time of the adaptive process. 
The MDF adaptive filter compares favorably to other frequency domain adaptive filters when its adaptation speed and misadjustment are tested in computer simulations.", "title": "" } ]
scidocsrr
b703ca9cf76a998b7004dfc19c16021f
0.35mm pitch wafer level package board level reliability: Studying effect of ball de-population with varying ball size
[ { "docid": "8eace30c00d9b118635dc8a2e383f36b", "text": "Wafer Level Packaging (WLP) has the highest potential for future single chip packages because the WLP is intrinsically a chip size package. The package is completed directly on the wafer then singulated by dicing for the assembly. All packaging and testing operations of the dice are replaced by whole wafer fabrication and wafer level testing. Therefore, it becomes more cost-effective with decreasing die size or increasing wafer size. However, due to the intrinsic mismatch of the coefficient of thermal expansion (CTE) between silicon chip and plastic PCB material, solder ball reliability subject to temperature cycling becomes the weakest point of the technology. In this paper some fundamental principles in designing WLP structure to achieve the robust reliability are demonstrated through a comprehensive study of a variety of WLP technologies. The first principle is the 'structural flexibility' principle. The more flexible a WLP structure is, the less the stresses that are applied on the solder balls will be. Ball on polymer WLP, Cu post WLP, polymer core solder balls are such examples to achieve better flexibility of overall WLP structure. The second principle is the 'local enhancement' at the interface region of solder balls where fatigue failures occur. Polymer collar WLP, and increasing solder opening size are examples to reduce the local stress level. In this paper, the reliability improvements are discussed through various existing and tested WLP technologies at silicon level and ball level, respectively. The fan-out wafer level packaging is introduced, which is expected to extend the standard WLP to the next stage with unlimited potential applications in future.", "title": "" }, { "docid": "9e91f7e57e074ec49879598c13035d70", "text": "Wafer Level Package (WLP) technology has seen tremendous advances in recent years and is rapidly being adopted at the 65nm Low-K silicon node. For a true WLP, the package size is same as the die (silicon) size and the package is usually mounted directly on to the Printed Circuit Board (PCB). Board level reliability (BLR) is a bigger challenge on WLPs than the package level due to a larger CTE mismatch and difference in stiffness between silicon and the PCB [1]. The BLR performance of the devices with Low-K dielectric silicon becomes even more challenging due to their fragile nature and lower mechanical strength. A post fab re-distribution layer (RDL) with polymer stack up provides a stress buffer resulting in an improved board level reliability performance. Drop shock (DS) and temperature cycling test (TCT) are the most commonly run tests in the industry to gauge the BLR performance of WLPs. While a superior drop performance is required for devices targeting mobile handset applications, achieving acceptable TCT performance on WLPs can become challenging at times. BLR performance of WLP is sensitive to design features such as die size, die aspect ratio, ball pattern and ball density etc. In this paper, 65nm WLPs with a post fab Cu RDL have been studied for package and board level reliability. Standard JEDEC conditions are applied during the reliability testing. Here, we present a detailed reliability evaluation on multiple WLP sizes and varying ball patterns. Die size ranging from 10 mm2 to 25 mm2 were studied along with variation in design features such as die aspect ratio and the ball density (fully populated and de-populated ball pattern). 
All test vehicles used the aforementioned 65nm fab node.", "title": "" } ]
[ { "docid": "8d5222e552ffcd47595c5ec6d3d1f0fe", "text": "The main purpose of this paper is to highlight the features of Artificial Intelligence (AI), how it was developed, and some of its main applications. John McCarthy, one of the founders of artificial intelligence research, once defined the field as “getting a computer to do things which, when done by people, are said to involve intelligence.” The point of the definition was that he felt perfectly comfortable about carrying on his research without first having to defend any particular philosophical view of what the word “intelligence” means. The beginnings of artificial intelligence are traced to philosophy, fiction, and imagination. Early inventions in electronics, engineering, and many other disciplines have influenced AI. Some early milestones include work in problems solving which included basic work in learning, knowledge representation, and inference as well as demonstration programs in language understanding, translation, theorem proving, associative memory, and knowledge-based systems.", "title": "" }, { "docid": "f1e5f8ab0b2ce32553dd5e08f1113b36", "text": "We examined the hypothesis that an excess accumulation of intramuscular lipid (IMCL) is associated with insulin resistance and that this may be mediated by the oxidative capacity of muscle. Nine sedentary lean (L) and 11 obese (O) subjects, 8 obese subjects with type 2 diabetes mellitus (D), and 9 lean, exercise-trained (T) subjects volunteered for this study. Insulin sensitivity (M) determined during a hyperinsulinemic (40 mU x m(-2)min(-1)) euglycemic clamp was greater (P < 0.01) in L and T, compared with O and D (9.45 +/- 0.59 and 10.26 +/- 0.78 vs. 5.51 +/- 0.61 and 1.15 +/- 0.83 mg x min(-1)kg fat free mass(-1), respectively). IMCL in percutaneous vastus lateralis biopsy specimens by quantitative image analysis of Oil Red O staining was approximately 2-fold higher in D than in L (3.04 +/- 0.39 vs. 1.40 +/- 0.28% area as lipid; P < 0.01). IMCL was also higher in T (2.36 +/- 0.37), compared with L (P < 0.01). The oxidative capacity of muscle determined with succinate dehydrogenase staining of muscle fibers was higher in T, compared with L, O, and D (50.0 +/- 4.4, 36.1 +/- 4.4, 29.7 +/- 3.8, and 33.4 +/- 4.7 optical density units, respectively; P < 0.01). IMCL was negatively associated with M (r = -0.57, P < 0.05) when endurance-trained subjects were excluded from the analysis, and this association was independent of body mass index. However, the relationship between IMCL and M was not significant when trained individuals were included. There was a positive association between the oxidative capacity and M among nondiabetics (r = 0.37, P < 0.05). In summary, skeletal muscle of trained endurance athletes is markedly insulin sensitive and has a high oxidative capacity, despite having an elevated lipid content. In conclusion, the capacity for lipid oxidation may be an important mediator of the association between excess muscle lipid accumulation and insulin resistance.", "title": "" }, { "docid": "444a9192398374227c9cd93ec253f139", "text": "185 Abstract— The concept of sequence Data Mining was first introduced by Rakesh Agrawal and Ramakrishnan Srikant in the year 1995. The problem was first introduced in the context of market analysis. It aimed to retrieve frequent patterns in the sequences of products purchased by customers through time ordered transactions. 
Later on its application was extended to complex applications like telecommunication, network detection, DNA research, etc. Several algorithms were proposed. The very first was Apriori algorithm, which was put forward by the founders themselves. Later more scalable algorithms for complex applications were developed. E.g. GSP, Spade, PrefixSpan etc. The area underwent considerable advancements since its introduction in a short span. In this paper, a systematic survey of the sequential pattern mining algorithms is performed. This paper investigates these algorithms by classifying study of sequential pattern-mining algorithms into two broad categories. First, on the basis of algorithms which are designed to increase efficiency of mining and second, on the basis of various extensions of sequential pattern mining designed for certain application. At the end, comparative analysis is done on the basis of important key features supported by various algorithms and current research challenges are discussed in this field of data mining.", "title": "" }, { "docid": "76d514ee806b154b4fef2fe2c63c8b27", "text": "Attacks on systems and organisations increasingly exploit human actors, for example through social engineering, complicating their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified through brainstorming of experts. In this work we formalize attack tree generation including human factors; based on recent advances in system models we develop a technique to identify possible attacks analytically, including technical and human factors. Our systematic attack generation is based on invalidating policies in the system model by identifying possible sequences of actions that lead to an attack. The generated attacks are precise enough to illustrate the threat, and they are general enough to hide the details of individual steps.", "title": "" }, { "docid": "e6d99d126e42697da3f37dd26ac02524", "text": "The authors developed, tested, and replicated a model in which safety-specific transformational leadership predicted occupational injuries in 2 separate studies. Data from 174 restaurant workers (M age = 26.75 years, range = 15-64) were analyzed using structural equation modeling (LISREL 8; K. G. Jöreskog & D. Sörbom, 1993) and provided strong support for a model whereby safety-specific transformational leadership predicted occupational injuries through the effects of perceived safety climate, safety consciousness, and safety-related events. Study 2 replicated and extended this model with data from 164 young workers from diverse jobs (M age = 19.54 years, range = 14-24). Safety-specific transformational leadership and role overload were related to occupational injuries through the effects of perceived safety climate, safety consciousness, and safety-related events.", "title": "" }, { "docid": "8ed8886668eef29d9574be5f6f058959", "text": "We present a fully trainable solution for binarization of degraded document images using extremely randomized trees. Unlike previous attempts that often use simple features, our method encodes all heuristics about whether or not a pixel is foreground text into a high-dimensional feature vector and learns a more complicated decision function. We introduce two novel features, the Logarithm Intensity Percentile (LIP) and the Relative Darkness Index (RDI), and combine them with low level features, and reformulated features from existing binarization methods. 
Experimental results show that using small sample size (about 1.5% of all available training data), we can achieve a binarization performance comparable to manually-tuned, state-of-the-art methods. Additionally, the trained document binarization classifier shows good generalization capabilities on out-of-domain data.", "title": "" }, { "docid": "52f414bea50c9a7f78fcbf198b6caf4c", "text": "Searchable encryption (SE) allows a client to outsource a dataset to an untrusted server while enabling the server to answer keyword queries in a private manner. SE can be used as a building block to support more expressive private queries such as range/point and boolean queries, while providing formal security guarantees. To scale SE to big data using external memory, new schemes with small locality have been proposed, where locality is defined as the number of non-continuous reads that the server makes for each query. Previous space-efficient SE schemes achieve optimal locality by increasing the read efficiency-the number of additional memory locations (false positives) that the server reads per result item. This can hurt practical performance.\n In this work, we design, formally prove secure, and evaluate the first SE scheme with tunable locality and linear space. Our first scheme has optimal locality and outperforms existing approaches (that have a slightly different leakage profile) by up to 2.5 orders of magnitude in terms of read efficiency, for all practical database sizes. Another version of our construction with the same leakage as previous works can be tuned to have bounded locality, optimal read efficiency and up to 60x more efficient end-to-end search time. We demonstrate that our schemes work fast in in-memory as well, leading to search time savings of up to 1 order of magnitude when compared to the most practical in-memory SE schemes. Finally, our construction can be tuned to achieve trade-offs between space, read efficiency, locality, parallelism and communication overhead.", "title": "" }, { "docid": "64f15815e4c1c94c3dfd448dec865b85", "text": "Modern software systems are typically large and complex, making comprehension of these systems extremely difficult. Experienced programmers comprehend code by seamlessly processing synonyms and other word relations. Thus, we believe that automated comprehension and software tools can be significantly improved by leveraging word relations in software. In this paper, we perform a comparative study of six state of the art, English-based semantic similarity techniques and evaluate their effectiveness on words from the comments and identifiers in software. Our results suggest that applying English-based semantic similarity techniques to software without any customization could be detrimental to the performance of the client software tools. We propose strategies to customize the existing semantic similarity techniques to software, and describe how various program comprehension tools can benefit from word relation information.", "title": "" }, { "docid": "48ba3cad9e20162b6dcbb28ead47d997", "text": "This paper compares the accuracy of several variations of the BLEU algorithm when applied to automatically evaluating student essays. The different configurations include closed-class word removal, stemming, two baseline wordsense disambiguation procedures, and translating the texts into a simple semantic representation. We also prove empirically that the accuracy is kept when the student answers are translated automatically.
Although none of the representations clearly outperform the others, some conclusions are drawn from the results.", "title": "" }, { "docid": "7c36d7f2a9604470e0e97bd2425bbf0c", "text": "Gamification, the use of game mechanics in non-gaming applications, has been applied to various systems to encourage desired user behaviors. In this paper, we examine patterns of user activity in an enterprise social network service after the removal of a points-based incentive system. Our results reveal that the removal of the incentive scheme did reduce overall participation via contribution within the SNS. We also describe the strategies by point leaders and observe that users geographically distant from headquarters tended to comment on profiles outside of their home country. Finally, we describe the implications of the removal of extrinsic rewards, such as points and badges, on social software systems, particularly those deployed within an enterprise.", "title": "" }, { "docid": "8fe823702191b4a56defaceee7d19db6", "text": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.", "title": "" }, { "docid": "9688efb8845895d49029c07d397a336b", "text": "Familial hypercholesterolaemia (FH) leads to elevated plasma levels of LDL-cholesterol and increased risk of premature atherosclerosis. Dietary treatment is recommended to all patients with FH in combination with lipid-lowering drug therapy. Little is known about how children with FH and their parents respond to dietary advice. The aim of the present study was to characterise the dietary habits in children with FH. A total of 112 children and young adults with FH and a non-FH group of children (n 36) were included. The children with FH had previously received dietary counselling. The FH subjects were grouped as: 12-14 years (FH (12-14)) and 18-28 years (FH (18-28)). Dietary data were collected by SmartDiet, a short self-instructing questionnaire on diet and lifestyle where the total score forms the basis for an overall assessment of the diet. Clinical and biochemical data were retrieved from medical records. The SmartDiet scores were significantly improved in the FH (12-14) subjects compared with the non-FH subjects (SmartDiet score of 31 v. 28, respectively). More FH (12-14) subjects compared with non-FH children consumed low-fat milk (64 v. 18 %, respectively), low-fat cheese (29 v. 3%, respectively), used margarine with highly unsaturated fat (74 v. 14 %, respectively). In all, 68 % of the FH (12-14) subjects and 55 % of the non-FH children had fish for dinner twice or more per week. The FH (18-28) subjects showed the same pattern in dietary choices as the FH (12-14) children. 
In contrast to the choices of low-fat dietary items, 50 % of the FH (12-14) subjects consumed sweet spreads or sweet drinks twice or more per week compared with only 21 % in the non-FH group. In conclusion, ordinary out-patient dietary counselling of children with FH seems to have a long-lasting effect, as the diet of children and young adults with FH consisted of more products that are favourable with regard to the fatty acid composition of the diet.", "title": "" }, { "docid": "5ba72505e19ded19685f43559868bfdf", "text": "In this paper, we present an optimally-modified log-spectral amplitude (OM-LSA) speech estimator and a minima controlled recursive averaging (MCRA) noise estimation approach for robust speech enhancement. The spectral gain function, which minimizes the mean-square error of the log-spectra, is obtained as a weighted geometric mean of the hypothetical gains associated with the speech presence uncertainty. The noise estimate is given by averaging past spectral power values, using a smoothing parameter that is adjusted by the speech presence probability in subbands. We introduce two distinct speech presence probability functions, one for estimating the speech and one for controlling the adaptation of the noise spectrum. The former is based on the time–frequency distribution of the a priori signal-to-noise ratio. The latter is determined by the ratio between the local energy of the noisy signal and its minimum within a specified time window. Objective and subjective evaluation under various environmental conditions confirm the superiority of the OM-LSA and MCRA estimators. Excellent noise suppression is achieved, while retaining weak speech components and avoiding the musical residual noise phenomena. © 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "7e047b7c0a0ded44106ce6b50726d092", "text": "Skeleton-based action recognition task is entangled with complex spatio-temporal variations of skeleton joints, and remains challenging for Recurrent Neural Networks (RNNs). In this work, we propose a temporal-then-spatial recalibration scheme to alleviate such complex variations, resulting in an end-to-end Memory Attention Networks (MANs) which consist of a Temporal Attention Recalibration Module (TARM) and a Spatio-Temporal Convolution Module (STCM). Specifically, the TARM is deployed in a residual learning module that employs a novel attention learning network to recalibrate the temporal attention of frames in a skeleton sequence. The STCM treats the attention calibrated skeleton joint sequences as images and leverages the Convolution Neural Networks (CNNs) to further model the spatial and temporal information of skeleton data. These two modules (TARM and STCM) seamlessly form a single network architecture that can be trained in an end-to-end fashion. MANs significantly boost the performance of skeleton-based action recognition and achieve the best results on four challenging benchmark datasets: NTU RGB+D, HDM05, SYSU-3D and UT-Kinect.", "title": "" }, { "docid": "311d186966b7d697731e4c2450289418", "text": "PURPOSE OF REVIEW\nThe goal of this paper is to review current literature on nutritional ketosis within the context of weight management and metabolic syndrome, namely, insulin resistance, lipid profile, cardiovascular disease risk, and development of non-alcoholic fatty liver disease.
We provide background on the mechanism of ketogenesis and describe nutritional ketosis.\n\n\nRECENT FINDINGS\nNutritional ketosis has been found to improve metabolic and inflammatory markers, including lipids, HbA1c, high-sensitivity CRP, fasting insulin and glucose levels, and aid in weight management. We discuss these findings and elaborate on potential mechanisms of ketones for promoting weight loss, decreasing hunger, and increasing satiety. Humans have evolved with the capacity for metabolic flexibility and the ability to use ketones for fuel. During states of low dietary carbohydrate intake, insulin levels remain low and ketogenesis takes place. These conditions promote breakdown of excess fat stores, sparing of lean muscle, and improvement in insulin sensitivity.", "title": "" }, { "docid": "6a40a7cf6690ac39d8b73048dad51e97", "text": "Power-flow modeling of a unified power-flow controller (UPFC) increases the complexities of the computer program codes for a Newton-Raphson load-flow (NRLF) analysis. This is due to the fact that modifications of the existing codes are needed for computing power injections, and the elements of the Jacobian matrix to take into account the contributions of the series and shunt voltage sources of the UPFC. Additionally, new codes for computing the UPFC real-power injection terms as well as the associated Jacobian matrix need to be developed. To reduce this complexity of programming codes, in this paper, an indirect yet exact UPFC model is proposed. In the proposed model, an existing power system installed with UPFC is transformed into an augmented equivalent network without any UPFC. Due to the absence of any UPFC, the augmented network can easily be solved by reusing the existing NRLF computer codes to obtain the solution of the original network containing UPFC(s). As a result, substantial reduction in the complexities of the computer program codes takes place. Additionally, the proposed model can also account for various practical device limit constraints of the UPFC.", "title": "" }, { "docid": "b92484f67bf2d3f71d51aee9fb7abc86", "text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.", "title": "" }, { "docid": "4bc73a7e6a6975ba77349cac62a96c18", "text": "BACKGROUND\nIn May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear.\n\n\nMETHODS\nThe study sample was identified through systematic searches in iTunes and Google Play. 
Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play.\n\n\nRESULTS\nOn average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found.\n\n\nCONCLUSIONS\nThe present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.", "title": "" }, { "docid": "61304d369ea790d80b24259336d6974c", "text": "After searching for the keywords “information privacy” in ABI/Informs focusing on scholarly articles, we obtained a listing of 340 papers. We first eliminated papers that were anonymous, table of contents, interviews with experts, or short opinion pieces. We also removed articles not related to our focus on information privacy research in IS literature. A total of 218 articles were removed as explained in Table A1.", "title": "" }, { "docid": "3c7d25c85b837a3337c93ca2e1e54af4", "text": "BACKGROUND\nThe treatment of acne scars with fractional CO(2) lasers is gaining increasing impact, but has so far not been compared side-by-side to untreated control skin.\n\n\nOBJECTIVE\nIn a randomized controlled study to examine efficacy and adverse effects of fractional CO(2) laser resurfacing for atrophic acne scars compared to no treatment.\n\n\nMETHODS\nPatients (n = 13) with atrophic acne scars in two intra-individual areas of similar sizes and appearances were randomized to (i) three monthly fractional CO(2) laser treatments (MedArt 610; 12-14 W, 48-56 mJ/pulse, 13% density) and (ii) no treatment. Blinded on-site evaluations were performed by three physicians on 10-point scales. Endpoints were change in scar texture and atrophy, adverse effects, and patient satisfaction.\n\n\nRESULTS\nPreoperatively, acne scars appeared with moderate to severe uneven texture (6.15 ± 1.23) and atrophy (5.72 ± 1.45) in both interventional and non-interventional control sites, P = 1. Postoperatively, lower scores of scar texture and atrophy were obtained at 1 month (scar texture 4.31 ± 1.33, P < 0.0001; atrophy 4.08 ± 1.38, P < 0.0001), at 3 months (scar texture 4.26 ± 1.97, P < 0.0001; atrophy 3.97 ± 2.08, P < 0.0001), and at 6 months (scar texture 3.89 ± 1.7, P < 0.0001; atrophy 3.56 ± 1.76, P < 0.0001). Patients were satisfied with treatments and evaluated scar texture to be mild or moderately improved. 
Adverse effects were minor.\n\n\nCONCLUSIONS\nIn this single-blinded randomized controlled trial we demonstrated that moderate to severe atrophic acne scars can be safely improved by ablative fractional CO(2) laser resurfacing. The use of higher energy levels might have improved the results and possibly also induced significant adverse effects.", "title": "" } ]
scidocsrr
5a6887e33ec830afafeae7b655b9823d
A Study on Outlier Detection for Temporal Data
[ { "docid": "f598677e19789c92c31936440e709c4d", "text": "Temporal datasets, in which data evolves continuously, exist in a wide variety of applications, and identifying anomalous or outlying objects from temporal datasets is an important and challenging task. Different from traditional outlier detection, which detects objects that have quite different behavior compared with the other objects, temporal outlier detection tries to identify objects that have different evolutionary behavior compared with other objects. Usually objects form multiple communities, and most of the objects belonging to the same community follow similar patterns of evolution. However, there are some objects which evolve in a very different way relative to other community members, and we define such objects as evolutionary community outliers. This definition represents a novel type of outliers considering both temporal dimension and community patterns. We investigate the problem of identifying evolutionary community outliers given the discovered communities from two snapshots of an evolving dataset. To tackle the challenges of community evolution and outlier detection, we propose an integrated optimization framework which conducts outlier-aware community matching across snapshots and identification of evolutionary outliers in a tightly coupled way. A coordinate descent algorithm is proposed to improve community matching and outlier detection performance iteratively. Experimental results on both synthetic and real datasets show that the proposed approach is highly effective in discovering interesting evolutionary community outliers.", "title": "" }, { "docid": "90564374d0c72816f930bc629f97d277", "text": "Outlier detection is an integral component of statistical modelling and estimation. For highdimensional data, classical methods based on the Mahalanobis distance are usually not applicable. We propose an outlier detection procedure that replaces the classical minimum covariance determinant estimator with a high-breakdown minimum diagonal product estimator. The cut-off value is obtained from the asymptotic distribution of the distance, which enables us to control the Type I error and deliver robust outlier detection. Simulation studies show that the proposed method behaves well for high-dimensional data.", "title": "" }, { "docid": "a0ebe19188abab323122a5effc3c4173", "text": "In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes.", "title": "" } ]
[ { "docid": "b91b42da0e7ffe838bf9d7ab0bd54bea", "text": "When creating line drawings, artists frequently depict intended curves using multiple, tightly clustered, or overdrawn, strokes. Given such sketches, human observers can readily envision these intended, aggregate, curves, and mentally assemble the artist's envisioned 2D imagery. Algorithmic stroke consolidation---replacement of overdrawn stroke clusters by corresponding aggregate curves---can benefit a range of sketch processing and sketch-based modeling applications which are designed to operate on consolidated, intended curves. We propose StrokeAggregator, a novel stroke consolidation method that significantly improves on the state of the art, and produces aggregate curve drawings validated to be consistent with viewer expectations. Our framework clusters strokes into groups that jointly define intended aggregate curves by leveraging principles derived from human perception research and observation of artistic practices. We employ these principles within a coarse-to-fine clustering method that starts with an initial clustering based on pairwise stroke compatibility analysis, and then refines it by analyzing interactions both within and in-between clusters of strokes. We facilitate this analysis by computing a common 1D parameterization for groups of strokes via common aggregate curve fitting. We demonstrate our method on a large range of line drawings, and validate its ability to generate consolidated drawings that are consistent with viewer perception via qualitative user evaluation, and comparisons to manually consolidated drawings and algorithmic alternatives.", "title": "" }, { "docid": "ebe28ced7ecfccd52aa01b7740a617d3", "text": "Converting handwritten formulas to LaTex is a challenging machine learning problem. An essential step in the recognition of mathematical formulas is the symbol recognition. In this paper we show that pyramids of oriented gradients (PHOG) are effective features for recognizing mathematical symbols. Our best results are obtained using PHOG features along with a one-againstone SVM classifier. We train our classifier using images extracted from XY coordinates of online data from the CHROHME dataset, which contains 22000 character samples. We limit our analysis to 59 characters. The classifier achieves a 96% generalization accuracy on these characters and makes reasonable mistakes. We also demonstrate that our classifier is able to generalize gracefully to phone images of mathematical symbols written by a new user. On a small experiment performed on images of 75 handwritten symbols, the symbol recognition rates is 92 %. The code is available at: https://github.com/nicodjimenez/", "title": "" }, { "docid": "c478773f832e84e560b57a5ed74cbc76", "text": "Structural variants are implicated in numerous diseases and make up the majority of varying nucleotides among human genomes. Here we describe an integrated set of eight structural variant classes comprising both balanced and unbalanced variants, which we constructed using short-read DNA sequencing data and statistically phased onto haplotype blocks in 26 human populations. Analysing this set, we identify numerous gene-intersecting structural variants exhibiting population stratification and describe naturally occurring homozygous gene knockouts that suggest the dispensability of a variety of human genes. 
We demonstrate that structural variants are enriched on haplotypes identified by genome-wide association studies and exhibit enrichment for expression quantitative trait loci. Additionally, we uncover appreciable levels of structural variant complexity at different scales, including genic loci subject to clusters of repeated rearrangement and complex structural variants with multiple breakpoints likely to have formed through individual mutational events. Our catalogue will enhance future studies into structural variant demography, functional impact and disease association.", "title": "" }, { "docid": "b43178b53f927eb90473e2850f948cb6", "text": "We study the problem of learning a navigation policy for a robot to actively search for an object of interest in an indoor environment solely from its visual inputs. While scene-driven visual navigation has been widely studied, prior efforts on learning navigation policies for robots to find objects are limited. The problem is often more challenging than target scene finding as the target objects can be very small in the view and can be in an arbitrary pose. We approach the problem from an active perceiver perspective, and propose a novel framework that integrates a deep neural network based object recognition module and a deep reinforcement learning based action prediction mechanism. To validate our method, we conduct experiments on both a simulation dataset (AI2-THOR)and a real-world environment with a physical robot. We further propose a new decaying reward function to learn the control policy specific to the object searching task. Experimental results validate the efficacy of our method, which outperforms competing methods in both average trajectory length and success rate.", "title": "" }, { "docid": "ec1e79530ef20e2d8610475d07ee140d", "text": "a School of Social Sciences, Faculty of Health, Education and Social Sciences, University of the West of Scotland, High St., Paisley Campus, Paisley PA1 2BE, Scotland, United Kingdom b School of Computing, Faculty of Science and Technology, University of the West of Scotland, Paisley Campus, Paisley PA1 2BE, Scotland, United Kingdom c School of Psychological Sciences and Health, Faculty of Humanities and Social Science, University of Strathclyde, Glasgow, Scotland, United Kingdom", "title": "" }, { "docid": "155c9444bfdb61352eddd7140ae75125", "text": "To the best of our knowledge, we present the first hardware implementation of isogeny-based cryptography available in the literature. Particularly, we present the first implementation of the supersingular isogeny Diffie-Hellman (SIDH) key exchange, which features quantum-resistance. We optimize this design for speed by creating a high throughput multiplier unit, taking advantage of parallelization of arithmetic in $\\mathbb {F}_{p^{2}}$ , and minimizing pipeline stalls with optimal scheduling. Consequently, our results are also faster than software libraries running affine SIDH even on Intel Haswell processors. For our implementation at 85-bit quantum security and 128-bit classical security, we generate ephemeral public keys in 1.655 million cycles for Alice and 1.490 million cycles for Bob. We generate the shared secret in an additional 1.510 million cycles for Alice and 1.312 million cycles for Bob. On a Virtex-7, these results are approximately 1.5 times faster than known software implementations running the same 512-bit SIDH. 
Our results and observations show that the isogeny-based schemes can be implemented with high efficiency on reconfigurable hardware.", "title": "" }, { "docid": "b1a9a691c39ab778dcdcaab502dd13b2", "text": "Point-of-Interest recommendation is an essential means to help people discover attractive locations, especially when people travel out of town or to unfamiliar regions. While a growing line of research has focused on modeling user geographical preferences for POI recommendation, they ignore the phenomenon of user interest drift across geographical regions, i.e., users tend to have different interests when they travel in different regions, which discounts the recommendation quality of existing methods, especially for out-of-town users. In this paper, we propose a latent class probabilistic generative model Spatial-Temporal LDA (ST-LDA) to learn region-dependent personal interests according to the contents of their checked-in POIs at each region. As the users' check-in records left in the out-of-town regions are extremely sparse, ST-LDA incorporates the crowd's preferences by considering the public's visiting behaviors at the target region. To further alleviate the issue of data sparsity, a social-spatial collective inference framework is built on ST-LDA to enhance the inference of region-dependent personal interests by effectively exploiting the social and spatial correlation information. Besides, based on ST-LDA, we design an effective attribute pruning (AP) algorithm to overcome the curse of dimensionality and support fast online recommendation for large-scale POI data. Extensive experiments have been conducted to evaluate the performance of our ST-LDA model on two real-world and large-scale datasets. The experimental results demonstrate the superiority of ST-LDA and AP, compared with the state-of-the-art competing methods, by making more effective and efficient mobile recommendations.", "title": "" }, { "docid": "43fc501b2bf0802b7c1cc8c4280dcd85", "text": "We propose a data-driven stochastic method (DSM) to study stochastic partial differential equations (SPDEs) in the multiquery setting. An essential ingredient of the proposed method is to construct a data-driven stochastic basis under which the stochastic solutions to the SPDEs enjoy a compact representation for a broad range of forcing functions and/or boundary conditions. Our method consists of offline and online stages. A data-driven stochastic basis is computed in the offline stage using the Karhunen–Loève (KL) expansion. A two-level preconditioning optimization approach and a randomized SVD algorithm are used to reduce the offline computational cost. In the online stage, we solve a relatively small number of coupled deterministic PDEs by projecting the stochastic solution into the data-driven stochastic basis constructed offline. Compared with a generalized polynomial chaos method (gPC), the ratio of the computational complexities between DSM (online stage) and gPC is of order O((m/Np) ). Here m and Np are the numbers of elements in the basis used in DSM and gPC, respectively. Typically we expect m ≪ Np when the effective dimension of the stochastic solution is small. A timing model, which takes into account the offline computational cost of DSM, is constructed to demonstrate the efficiency of DSM. Applications of DSM to stochastic elliptic problems show considerable computational savings over traditional methods even with a small number of queries.
We also provide a method for an a posteriori error estimate and error correction.", "title": "" }, { "docid": "31ed7a47aa5ca6cf55d4bc1fbb1413d5", "text": "This article depicts the results of a study carried out to ascertain the information pattern based on the sources used by graduate students from the Islamic Studies Academy submitted at the University of Malaya, Kuala Lumpur. A total of 14377 citations consisting of 54 doctoral dissertations from the Year 2005 to 2009 were examined using the citation analysis. The highest citations per dissertation was 684, while the lowest being 105 citations. The result shows that the materials used by graduate students in this field vary and are multidisciplinary by nature. Books were cited more than other forms of sources contributing 65%, where journal articles contributed 20%.Conference proceedings contributed 11%, dissertations and thesis 3% and other categories consisted of web sites, interviews and legal documents contributing 9%. These findings corroborate with previous citations done in the Humanities discipline. Among the most popular cited journals are in-house journals namely Jurnal Syariah and Jurnal Usuluddin. In addition, graduate students used a substantial amount of Malaysian language sources at the rate of 60%, Arabic language scholarships contributed to 40% of the total citations. Approximately 30% of all sources cited are over 10 years of age. Hence, this study provides valuable insights to guide librarians in understanding the sources used and serves as an analytic tool for the development of source collection in the library services.", "title": "" }, { "docid": "1a1c9b8fa2b5fc3180bc1b504def5ea1", "text": "Wireless sensor networks can be deployed in any attended or unattended environments like environmental monitoring, agriculture, military, health care etc., where the sensor nodes forward the sensing data to the gateway node. As the sensor node has very limited battery power and cannot be recharged after deployment, it is very important to design a secure, effective and light weight user authentication and key agreement protocol for accessing the sensed data through the gateway node over insecure networks. Most recently, Turkanovic et al. proposed a light weight user authentication and key agreement protocol for accessing the services of the WSNs environment and claimed that the same protocol is efficient in terms of security and complexities than related existing protocols. In this paper, we have demonstrated several security weaknesses of the Turkanovic et al. protocol. Additionally, we have also illustrated that the authentication phase of the Turkanovic et al. is not efficient in terms of security parameters. In order to fix the above mentioned security pitfalls, we have primarily designed a novel architecture for the WSNs environment and basing upon which a proposed scheme has been presented for user authentication and key agreement scheme. The security validation of the proposed protocol has done by using BAN logic, which ensures that the protocol achieves mutual authentication and session key agreement property securely between the entities involved. Moreover, the proposed scheme has simulated using well popular AVISPA security tool, whose simulation results show that the protocol is SAFE under OFMC and CL-AtSe models. Besides, several security issues informally confirm that the proposed protocol is well protected in terms of relevant security attacks including the above mentioned security pitfalls. 
The proposed protocol not only resists the above mentioned security weaknesses, but also achieves complete security requirements including specially energy efficiency, user anonymity, mutual authentication and user-friendly password change phase. Performance comparison section ensures that the protocol is relatively efficient in terms of complexities. The security and performance analysis makes the system so efficient that the proposed protocol can be implemented in real-life application. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9544b2cc301e2e3f170f050de659dda4", "text": "In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the ``mappings\" between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single \"big switch\" abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract \"big switch\" to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality?\n We show that the answer to that question is \"no\" for existing virtual-to-physical mapping techniques: behavior can differ between the virtual \"big switch\" and the physical network, providing incorrect application-level behavior. We also show that that those incorrect behaviors occur despite the fact that the most pervasive and commonly-used correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual to physical mappings.", "title": "" }, { "docid": "99942af4e58325aeb3f733b04c337607", "text": "For E-commerce, such as online trade and interactions on the Internet are on the rise, a key issue is how to use simple and effective evaluation methods to accomplish trust decision-making for customers. It is well known subjective trust holds uncertainty like randomness and fuzziness. However, existing approaches commonly based on probability or fuzzy set theory cannot attach enough importance to uncertainty. To remedy this problem, a new quantificational subjective trust evaluation approach is proposed based on the cloud model. The subjective trust may be modeled with cloud model, and expected value and hyper-entropy of subjective cloud is used to evaluate the reputation of trust objects. Our experimental data shows that the method can effectively support subjective trust decision, which provides a helpful exploitation for the subjective trust evaluation.", "title": "" }, { "docid": "a245aca07bd707ee645cf5cb283e7c5e", "text": "The paradox of blunted parathormone (PTH) secretion in patients with severe hypomagnesemia has been known for more than 20 years, but the underlying mechanism is not deciphered. We determined the effect of low magnesium on in vitro PTH release and on the signals triggered by activation of the calcium-sensing receptor (CaSR). Analogous to the in vivo situation, PTH release from dispersed parathyroid cells was suppressed under low magnesium. 
In parallel, the two major signaling pathways responsible for CaSR-triggered block of PTH secretion, the generation of inositol phosphates, and the inhibition of cAMP were enhanced. Desensitization or pertussis toxin-mediated inhibition of CaSR-stimulated signaling suppressed the effect of low magnesium, further confirming that magnesium acts within the axis CaSR-G-protein. However, the magnesium binding site responsible for inhibition of PTH secretion is not identical with the extracellular ion binding site of the CaSR, because the magnesium deficiency-dependent signal enhancement was not altered on CaSR receptor mutants with increased or decreased affinity for calcium and magnesium. By contrast, when the magnesium affinity of the G alpha subunit was decreased, CaSR activation was no longer affected by magnesium. Thus, the paradoxical block of PTH release under magnesium deficiency seems to be mediated through a novel mechanism involving an increase in the activity of G alpha subunits of heterotrimeric G-proteins.", "title": "" }, { "docid": "960022742172d6d0e883a23c74d800ef", "text": "A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms.", "title": "" }, { "docid": "4daec6170f18cc8896411e808e53355f", "text": "The goal of this note is to point out that any distributed representation can be turned into a classifier through inversion via Bayes rule. The approach is simple and modular, in that it will work with any language representation whose training can be formulated as optimizing a probability model. In our application to 2 million sentences from Yelp reviews, we also find that it performs as well as or better than complex purpose-built algorithms.", "title": "" }, { "docid": "0e1610c6b54a6e819b5557bcac0274cb", "text": "This work presents a novel broad-band dual-polarized microstrip patch antenna, which is fed by proximity coupling. The microstrip line with slotted ground plane is used at two ports to feed the patch antenna. By using only one patch, the prototype antenna yields a bandwidth of 22% and 21.3% at the input port 1 and 2, respectively. The isolation between two input ports is below -34 dB across the bandwidth. Good broadside radiation patterns are observed, and the cross-polar levels are below -21 dB at both E and H planes. Due to its simple structure, it is easy to form arrays by using this antenna as an element.", "title": "" }, { "docid": "e1b6cc1dbd518760c414cd2ddbe88dd5", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. 
The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Mind the Traps! Design Guidelines for Rigorous BCI Experiments Camille Jeunet, Stefan Debener, Fabien Lotte, Jeremie Mattout, Reinhold Scherer, Catharina Zich", "title": "" }, { "docid": "a25fa0c0889b62b70bf95c16f9966cc4", "text": "We deal with the problem of document representation for the task of measuring semantic relatedness between documents. A document is represented as a compact concept graph where nodes represent concepts extracted from the document through references to entities in a knowledge base such as DBpedia. Edges represent the semantic and structural relationships among the concepts. Several methods are presented to measure the strength of those relationships. Concepts are weighted through the concept graph using closeness centrality measure which reflects their relevance to the aspects of the document. A novel similarity measure between two concept graphs is presented. The similarity measure first represents concepts as continuous vectors by means of neural networks. Second, the continuous vectors are used to accumulate pairwise similarity between pairs of concepts while considering their assigned weights. We evaluate our method on a standard benchmark for document similarity. Our method outperforms state-of-the-art methods including ESA (Explicit Semantic Annotation) while our concept graphs are much smaller than the concept vectors generated by ESA. Moreover, we show that by combining our concept graph with ESA, we obtain an even further improvement.", "title": "" }, { "docid": "8d5dca364cbe5e3825e2f267d1c41d50", "text": "This paper describes an algorithm based on constrained variance maximization for the restoration of a blurred image. Blurring is a smoothing process by definition. Accordingly, the deblurring filter shall be able to perform as a high pass filter, which increases the variance. Therefore, we formulate a variance maximization object function for the deconvolution filter. Using principal component analysis (PCA), we find the filter maximizing the object function. PCA is more than just a high pass filter; by maximizing the variances, it is able to perform the decorrelation, by which the original image is extracted from the mixture (the blurred image). Our approach was experimentally compared with the adaptive Lucy-Richardson maximum likelihood (ML) algorithm. The comparative results on both synthesized and real blurred images are included.", "title": "" } ]
scidocsrr
a3c26b6b89eeeddb00d5a6a89d59faab
Deep Texture and Structure Aware Filtering Network for Image Smoothing
[ { "docid": "60eec67cd3b60258a6b3179c33279a22", "text": "We present a new efficient edge-preserving filter-“tree filter”-to achieve strong image smoothing. The proposed filter can smooth out high-contrast details while preserving major edges, which is not achievable for bilateral-filter-like techniques. Tree filter is a weighted-average filter, whose kernel is derived by viewing pixel affinity in a probabilistic framework simultaneously considering pixel spatial distance, color/intensity difference, as well as connectedness. Pixel connectedness is acquired by treating pixels as nodes in a minimum spanning tree (MST) extracted from the image. The fact that an MST makes all image pixels connected through the tree endues the filter with the power to smooth out high-contrast, fine-scale details while preserving major image structures, since pixels in small isolated region will be closely connected to surrounding majority pixels through the tree, while pixels inside large homogeneous region will be automatically dragged away from pixels outside the region. The tree filter can be separated into two other filters, both of which turn out to have fast algorithms. We also propose an efficient linear time MST extraction algorithm to further improve the whole filtering speed. The algorithms give tree filter a great advantage in low computational complexity (linear to number of image pixels) and fast speed: it can process a 1-megapixel 8-bit image at ~ 0.25 s on an Intel 3.4 GHz Core i7 CPU (including the construction of MST). The proposed tree filter is demonstrated on a variety of applications.", "title": "" }, { "docid": "87aedf5f9fe7a397ed1a2b6303bdd9b1", "text": "We propose a principled convolutional neural pyramid (CNP) framework for general low-level vision and image processing tasks. It is based on the essential finding that many applications require large receptive fields for structure understanding. But corresponding neural networks for regression either stack many layers or apply large kernels to achieve it, which is computationally very costly. Our pyramid structure can greatly enlarge the field while not sacrificing computation efficiency. Extra benefit includes adaptive network depth and progressive upsampling for quasirealtime testing on VGA-size input. Our method profits a broad set of applications, such as depth/RGB image restoration, completion, noise/artifact removal, edge refinement, image filtering, image enhancement and colorization.", "title": "" } ]
[ { "docid": "9415182c28d6c20768cfba247eb63bac", "text": "The aim of this paper is to perform the main part of the restructuring processes with Business Process Reengineering (BPR) methodology. The first step was to choose the processes for analysis. Two business processes, which occur in most of the manufacturing companies, have been selected. Afterwards, current state of these processes was examined. The conclusions were used to propose own changes in accordance with assumptions of the BPR. This was possible through modelling and simulation of selected processes with iGrafx modeling software.", "title": "" }, { "docid": "6bbf27088fb5185009c5555f8aceeb04", "text": "BACKGROUND\nGood prosthetic suspension system secures the residual limb inside the prosthetic socket and enables easy donning and doffing. This study aimed to introduce, evaluate and compare a newly designed prosthetic suspension system (HOLO) with the current suspension systems (suction, pin/lock and magnetic systems).\n\n\nMETHODS\nAll the suspension systems were tested (tensile testing machine) in terms of the degree of the shear strength and the patient's comfort. Nine transtibial amputees participated in this study. The patients were asked to use four different suspension systems. Afterwards, each participant completed a questionnaire for each system to evaluate their comfort. Furthermore, the systems were compared in terms of the cost.\n\n\nRESULTS\nThe maximum tensile load that the new system could bear was 490 N (SD, 5.5) before the system failed. Pin/lock, magnetic and suction suspension systems could tolerate loads of 580 N (SD, 8.5), 350.9 (SD, 7) and 310 N (SD, 8.4), respectively. Our subjects were satisfied with the new hook and loop system, particularly in terms of easy donning and doffing. Furthermore, the new system is considerably cheaper (35 times) than the current locking systems in the market.\n\n\nCONCLUSIONS\nThe new suspension system could successfully retain the prosthesis on the residual limb as a good alternative for lower limb amputees. In addition, the new system addresses some problems of the existing systems and is more cost effective than its counterparts.", "title": "" }, { "docid": "605b95e3c0448b5ce9755ce6289894d7", "text": "Website success hinges on how credible the consumers consider the information on the website. Unless consumers believe the website's information is credible, they are not likely to be willing to act on the advice and will not develop loyalty to the website. This paper reports on how individual differences and initial website impressions affect perceptions of information credibility of an unfamiliar advice website. Results confirm that several individual difference variables and initial impression variables (perceived reputation, perceived website quality, and willingness to explore the website) play an important role in developing information credibility of an unfamiliar website, with first impressions and individual differences playing equivalent roles. The study also confirms the import of information credibility by demonstrating it positively influences perceived usefulness, perceived site risk, willingness to act on website advice, and perceived consumer loyalty toward the website.", "title": "" }, { "docid": "682254fdd4f79a1c04ce5ded334c4d99", "text": "Measuring voice quality for telephony is not a new problem. However, packet-switched, best-effort networks such as the Internet present significant new challenges for the delivery of real-time voice traffic. 
Unlike the circuit-switched public switched telephone network (PSTN), Internet protocol (IP) networks guarantee neither sufficient bandwidth for the voice traffic nor a constant, acceptable delay. Dropped packets and varying delays introduce distortions not found in traditional telephony. In addition, if a low bitrate codec is used in voice over IP (VoIP) to achieve a high compression ratio, the original waveform can be significantly distorted. These new potential sources of signal distortion present significant challenges for objectively measuring speech quality. Measurement techniques designed for the PSTN may not perform well in VoIP environments. Our objective is to find a speech quality metric that accurately predicts subjective human perception under the conditions present in VoIP systems. To do this, we compared three types of measures: perceptually weighted distortion measures such as enhanced modified Bark spectral distance (EMBSD) and measuring normalizing blocks (MNB), word-error rates of continuous speech recognizers, and the ITU E-model. We tested the performance of these measures under conditions typical of a VoIP system. We found that the E-model had the highest correlation with mean opinion scores (MOS). The E-model is well-suited for online monitoring because it does not require the original (undistorted) signal to compute its quality metric and because it is computationally simple.", "title": "" }, { "docid": "6b5e9fa6f81e311dcd5e8154b64a111c", "text": "Silicon Carbide (SiC) devices and modules have been developed with high blocking voltages for Medium Voltage power electronics applications. Silicon devices do not exhibit higher blocking voltage capability due to its relatively low band gap energy compared to SiC counterparts. For the first time, 12kV SiC IGBTs have been fabricated. These devices exhibit excellent switching and static characteristics. A Three-level Neutral Point Clamped Voltage Source Converter (3L-NPC VSC) has been simulated with newly developed SiC IGBTs. This 3L-NPC Converter is used as a 7.2kV grid interface for the solid state transformer and STATCOM operation. Also a comparative study is carried out with 3L-NPC VSC simulated with 10kV SiC MOSFET and 6.5kV Silicon IGBT device data.", "title": "" }, { "docid": "e507c60b8eb437cbd6ca9692f1bf8727", "text": "We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions in both estimating accuracies, and combining multiple classifier outputs. 
The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.", "title": "" }, { "docid": "6af336fb0d0381b8fcb5f361b702de11", "text": "We highlight an important frontier in algorithmic fairness: disparity in the quality of natural language processing algorithms when applied to language from authors of di‚erent social groups. For example, current systems sometimes analyze the language of females and minorities more poorly than they do of whites and males. We conduct an empirical analysis of racial disparity in language identi€cation for tweets wriŠen in African-American English, and discuss implications of disparity in NLP.", "title": "" }, { "docid": "39cad8dd6ad23ad9d4f98f3905ac29c2", "text": "Estimating the disparity and normal direction of one pixel simultaneously, instead of only disparity, also known as 3D label methods, can achieve much higher subpixel accuracy in the stereo matching problem. However, it is extremely difficult to assign an appropriate 3D label to each pixel from the continuous label space $\\mathbb {R}^{3}$ while maintaining global consistency because of the infinite parameter space. In this paper, we propose a novel algorithm called PatchMatch-based superpixel cut to assign 3D labels of an image more accurately. In order to achieve robust and precise stereo matching between local windows, we develop a bilayer matching cost, where a bottom–up scheme is exploited to design the two layers. The bottom layer is employed to measure the similarity between small square patches locally by exploiting a pretrained convolutional neural network, and then, the top layer is developed to assemble the local matching costs in large irregular windows induced by the tangent planes of object surfaces. To optimize the spatial smoothness of local assignments, we propose a novel strategy to update 3D labels. In the procedure of optimization, both segmentation information and random refinement of PatchMatch are exploited to update candidate 3D label set for each pixel with high probability of achieving lower loss. Since pairwise energy of general candidate label sets violates the submodular property of graph cut, we propose a novel multilayer superpixel structure to group candidate label sets into candidate assignments, which thereby can be efficiently fused by $\\alpha $ -expansion graph cut. Extensive experiments demonstrate that our method can achieve higher subpixel accuracy in different data sets, and currently ranks first on the new challenging Middlebury 3.0 benchmark among all the existing methods.", "title": "" }, { "docid": "420719690b6249322927153daedba87b", "text": "• In-domain: 91% F1 on the dev set, 5 we reduced the learning rate from 10−4 to 10−5. We then stopped the training when F1 was not improved after 20 epochs. We did the same for ment-norm except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive though may have further ncreased the result of our models.", "title": "" }, { "docid": "2159c89f9f0ef91f8ee99f34027eeed9", "text": "Mobile Edge Computing (MEC) provides an efficient solution for IoT as it brings the cloud services close to the IoT device. This works well for IoT devices with limited mobility. IoT devices that are mobile by nature introduce a set of challenges to the MEC model. Challenges include security and efficiency aspects. 
Achieving mutual authentication of IoT device with the cloud edge provider is essential to protect from many security threats. Also, the efficiency of data transmission when connecting to a new cloud edge provider requires efficient data mobility among MEC providers or MEC centers. This research paper proposes a new framework that offers a secure and efficient MEC for IoT applications with mobile devices.", "title": "" }, { "docid": "53d41fb8e188add204ba96669715b49a", "text": "A nationwide survey was conducted to investigate the prevalence of video game addiction and problematic video game use and their association with physical and mental health. An initial sample comprising 2,500 individuals was randomly selected from the Norwegian National Registry. A total of 816 (34.0 percent) individuals completed and returned the questionnaire. The majority (56.3 percent) of respondents used video games on a regular basis. The prevalence of video game addiction was estimated to be 0.6 percent, with problematic use of video games reported by 4.1 percent of the sample. Gender (male) and age group (young) were strong predictors for problematic use of video games. A higher proportion of high frequency compared with low frequency players preferred massively multiplayer online role-playing games, although the majority of high frequency players preferred other game types. Problematic use of video games was associated with lower scores on life satisfaction and with elevated levels of anxiety and depression. Video game use was not associated with reported amount of physical exercise.", "title": "" }, { "docid": "ceda2e7fb5881c6b2080f09c226d99ba", "text": "Fraud detection has become an important issue to be explored. Fraud detection involves identifying fraud as quickly as possible once it has been perpetrated. Fraud is often a dynamic and challenging problem in Credit card lending business. Credit card fraud can be broadly classified into behavioral and application fraud, with behavioral fraud being the more prominent of the two. Supervised Modeling/Segmentation techniques are commonly used in fraud", "title": "" }, { "docid": "2dc084d063ec1610917e09921e145c24", "text": "This article describes an assistant interface to design and produce pop-up cards. A pop-up card is a piece of folded paper from which a three-dimensional structure pops up when opened. The authors propose an interface to assist the user in the design and production of a pop-up card. During the design process, the system examines whether the parts protrude from the card or whether the parts collide with one another when the card is closed. The user can concentrate on the design activity because the error occurrence and the error resolution are continuously fed to the user in real time. The authors demonstrate the features of their system by creating two pop-up card examples and perform an informal preliminary user study, showing that automatic protrusion and collision detection are effective in the design process. DOI: 10.4018/jcicg.2010070104 International Journal of Creative Interfaces and Computer Graphics, 1(2), 40-50, July-December 2010 41 Copyright © 2010, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. start over from the beginning. This process requires a lot of time, energy, and paper. Design and simulation in a computer help both nonprofessionals and professionals to design a pop-up card, eliminate the boring repetition, and save time. 
Glassner (1998, 2002) proposed methods for designing a pop-up card on a computer. He introduced several simple pop-up mechanisms and described how to use these mechanisms, how to simulate the position of vertices as an intersecting point of three spheres, how to check whether the structure sticks out beyond the cover or if a collision occurs during opening, and how to generate templates. His work is quite useful in designing simple pop-up cards. In this article, we build on Glassner’s pioneering work and introduce several innovative aspects. We add two new mechanisms based on the V-fold: the box and the cube. We present a detailed description of the interface for design, which Glassner did not describe in any detail. In addition, our system provides real-time error detection feedback during editing operations by examining whether parts protrude from the card when closed or whether they collide with one another during opening and closing. Finally, we report on an informal preliminary user study of our system involving four inexperienced users.", "title": "" }, { "docid": "7eff2743d36414e3f008be72598bfd8e", "text": "BACKGROUND\nPsychiatry has been consistently shown to be a profession characterised by 'high-burnout'; however, no nationwide surveys on this topic have been conducted in Japan.\n\n\nAIMS\nThe objective of this study was to estimate the prevalence of burnout and to ascertain the relationship between work environment satisfaction, work-life balance satisfaction and burnout among psychiatrists working in medical schools in Japan.\n\n\nMETHOD\nWe mailed anonymous questionnaires to all 80 psychiatry departments in medical schools throughout Japan. Work-life satisfaction, work-environment satisfaction and social support assessments, as well as the Maslach Burnout Inventory (MBI), were used.\n\n\nRESULTS\nSixty psychiatric departments (75.0%) responded, and 704 psychiatrists provided answers to the assessments and MBI. Half of the respondents (n = 311, 46.0%) experienced difficulty with their work-life balance. Based on the responses to the MBI, 21.0% of the respondents had a high level of emotional exhaustion, 12.0% had a high level of depersonalisation, and 72.0% had a low level of personal accomplishment. Receiving little support, experiencing difficulty with work-life balance, and having less work-environment satisfaction were significantly associated with higher emotional exhaustion. A higher number of nights worked per month was significantly associated with higher depersonalisation.\n\n\nCONCLUSIONS\nA low level of personal accomplishment was quite prevalent among Japanese psychiatrists compared with the results of previous studies. Poor work-life balance was related to burnout, and social support was noted to mitigate the impact of burnout.", "title": "" }, { "docid": "8db6d52ee2778d24c6561b9158806e84", "text": "Surface fuctionalization plays a crucial role in developing efficient nanoparticulate drug-delivery systems by improving their therapeutic efficacy and minimizing adverse effects. Here we propose a simple layer-by-layer self-assembly technique capable of constructing mesoporous silica nanoparticles (MSNs) into a pH-responsive drug delivery system with enhanced efficacy and biocompatibility. In this system, biocompatible polyelectrolyte multilayers of alginate/chitosan were assembled on MSN's surface to achieve pH-responsive nanocarriers. 
The functionalized MSNs exhibited improved blood compatibility over the bare MSNs in terms of low hemolytic and cytotoxic activity against human red blood cells. As a proof-of-concept, the anticancer drug doxorubicin (DOX) was loaded into nanocarriers to evaluate their use for the pH-responsive drug release both in vitro and in vivo. The DOX release from nanocarriers was pH dependent, and the release rate was much faster at lower pH than at higher pH. The in vitro evaluation on HeLa cells showed that the DOX-loaded nanocarriers provided a sustained intracellular DOX release and a prolonged DOX accumulation in the nucleus, thus resulting in a prolonged therapeutic efficacy. In addition, the pharmacokinetic and biodistribution studies in healthy rats showed that DOX-loaded nanocarriers had longer systemic circulation time and slower plasma elimination rate than free DOX. The histological results also revealed that the nanocarriers had good tissue compatibility. Thus, the biocompatible multilayer-functionalized MSNs hold substantial potential to be further developed as effective and safe drug-delivery carriers.", "title": "" }, { "docid": "574aca6aa63dd17949fcce6a231cf2d3", "text": "This paper presents an algorithm for segmenting the hair region in images taken under uncontrolled, real-life conditions. Our method is based on a simple statistical hair shape model representing the upper hair part. We detect this region by minimizing an energy which uses active shape and active contour. The upper hair region then allows us to learn the hair appearance parameters (color and texture) for the image considered. Finally, those parameters drive a pixel-wise segmentation technique that yields the desired (complete) hair region. We demonstrate the applicability of our method on several real images.", "title": "" }, { "docid": "aba0d28e9f1a138e569aa2525781e84d", "text": "A compact coplanar waveguide (CPW) monopole antenna is presented, comprising a fractal radiating patch in which a folded T-shaped element (FTSE) is embedded. The impedance match of the antenna is determined by the number of fractal unit cells, and the FTSE provides the necessary band-notch functionality. The filtering property can be tuned finely by controlling the length of the FTSE. Inclusion of a pair of rectangular notches in the ground plane is shown to extend the antenna's impedance bandwidth for ultrawideband (UWB) performance. The antenna's parameters were investigated to fully understand their effect on the antenna. Salient parameters obtained from this analysis enabled the optimization of the antenna's overall characteristics. Experimental and simulation results demonstrate that the antenna exhibits the desired VSWR level and radiation patterns across the entire UWB frequency range. The measured results showed the antenna operates over a frequency band between 2.94–11.17 GHz with a fractional bandwidth of 117% for VSWR ≤ 2, except at the notch band between 3.3–4.2 GHz. The antenna has dimensions of 14 × 18 × 1 mm³.", "title": "" }, { "docid": "1cbdf72cbb83763040abedb74748f6cd", "text": "Cyber attack is one of the most rapidly growing threats to the world of cutting edge information technology. 
As new tools and techniques are emerging everyday to make information accessible over the Internet, so is their vulnerabilities. Cyber defense is inevitable in order to ensure reliable and secure communication and transmission of information. Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) are the major technologies dominating in the area of cyber defense. Tremendous efforts have already been put in intrusion detection research for decades but intrusion prevention research is still in its infancy. This paper provides a comprehensive review of the current research in both Intrusion Detection Systems and recently emerged Intrusion Prevention Systems. Limitations of current research works in both fields are also discussed in conclusion.", "title": "" }, { "docid": "ce8de212a3ef98f8e8bd391e731108af", "text": "Direct democracy is often proposed as a possible solution to the 21st-century problems of democracy. However, this suggestion clashes with the size and complexity of 21st-century societies, entailing an excessive cognitive burden on voters, who would have to submit informed opinions on an excessive number of issues. In this paper I argue for the development of “voting avatars”, autonomous agents debating and voting on behalf of each citizen. Theoretical research from artificial intelligence, and in particular multiagent systems and computational social choice, proposes 21st-century techniques for this purpose, from the compact representation of a voter’s preferences and values, to the development of voting procedures for autonomous agents use only.", "title": "" }, { "docid": "5d4797cffc06cbde079bf4019dc196db", "text": "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)&#x2014;a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.", "title": "" } ]
scidocsrr
ec74bf2fedc7fd1ae83658c9d7d0dc61
A field study of API learning obstacles
[ { "docid": "639ef3a979e916a6e38b32243235b73a", "text": "Little is known about the specific kinds of questions programmers ask when evolving a code base and how well existing tools support those questions. To better support the activity of programming, answers are needed to three broad research questions: 1) What does a programmer need to know about a code base when evolving a software system? 2) How does a programmer go about finding that information? 3) How well do existing tools support programmers in answering those questions? We undertook two qualitative studies of programmers performing change tasks to provide answers to these questions. In this paper, we report on an analysis of the data from these two user studies. This paper makes three key contributions. The first contribution is a catalog of 44 types of questions programmers ask during software evolution tasks. The second contribution is a description of the observed behavior around answering those questions. The third contribution is a description of how existing deployed and proposed tools do, and do not, support answering programmers' questions.", "title": "" } ]
[ { "docid": "74eb19a956a8910fbfd50090fb04946c", "text": "In this paper, we explore student dropout behavior in Massive Open Online Courses(MOOC). We use as a case study a recent Coursera class from which we develop a survival model that allows us to measure the influence of factors extracted from that data on student dropout rate. Specifically we explore factors related to student behavior and social positioning within discussion forums using standard social network analytic techniques. The analysis reveals several significant predictors of dropout.", "title": "" }, { "docid": "6f31b0ba60dccb6f1c4ac3e4161f8a44", "text": "In this work, we propose an alternative solution for parallel wave generation by WaveNet. In contrast to parallel WaveNet (Oord et al., 2018), we distill a Gaussian inverse autoregressive flow from the autoregressive WaveNet by minimizing a novel regularized KL divergence between their highly-peaked output distributions. Our method computes the KL divergence in closed-form, which simplifies the training algorithm and provides very efficient distillation. In addition, we propose the first text-to-wave neural architecture for speech synthesis, which is fully convolutional and enables fast end-to-end training from scratch. It significantly outperforms the previous pipeline that connects a text-to-spectrogram model to a separately trained WaveNet (Ping et al., 2018). We also successfully distill a parallel waveform synthesizer conditioned on the hidden representation in this end-to-end model. 2", "title": "" }, { "docid": "289849c6cb55ed61d28c8fe5132fedde", "text": "An adaptive multilevel wavelet collocation method for solving multi-dimensional elliptic problems with localized structures is described. The method is based on multi-dimensional second generation wavelets, and is an extension of the dynamically adaptive second generation wavelet collocation method for evolution problems [Int. J. Comp. Fluid Dyn. 17 (2003) 151]. Wavelet decomposition is used for grid adaptation and interpolation, while a hierarchical finite difference scheme, which takes advantage of wavelet multilevel decomposition, is used for derivative calculations. The multilevel structure of the wavelet approximation provides a natural way to obtain the solution on a near optimal grid. In order to accelerate the convergence of the solver, an iterative procedure analogous to the multigrid algorithm is developed. The overall computational complexity of the solver is O(N ), where N is the number of adapted grid points. The accuracy and computational efficiency of the method are demonstrated for the solution of twoand three-dimensional elliptic test problems.", "title": "" }, { "docid": "a51803d5c0753f64f5216d2cc225d172", "text": "Twitter is a free social networking and micro-blogging service that enables its millions of users to send and read each other's \"tweets,\" or short, 140-character messages. The service has more than 190 million registered users and processes about 55 million tweets per day. Useful information about news and geopolitical events lies embedded in the Twitter stream, which embodies, in the aggregate, Twitter users' perspectives and reactions to current events. By virtue of sheer volume, content embedded in the Twitter stream may be useful for tracking or even forecasting behavior if it can be extracted in an efficient manner. 
In this study, we examine the use of information embedded in the Twitter stream to (1) track rapidly-evolving public sentiment with respect to H1N1 or swine flu, and (2) track and measure actual disease activity. We also show that Twitter can be used as a measure of public interest or concern about health-related events. Our results show that estimates of influenza-like illness derived from Twitter chatter accurately track reported disease levels.", "title": "" }, { "docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7", "text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.", "title": "" }, { "docid": "353500d18d56c0bf6dc13627b0517f41", "text": "In order to accelerate the learning process in high dimensional reinforcement learning problems, TD methods such as Q-learning and Sarsa are usually combined with eligibility traces. The recently introduced DQN (Deep Q-Network) algorithm, which is a combination of Q-learning with a deep neural network, has achieved good performance on several games in the Atari 2600 domain. However, the DQN training is very slow and requires too many time steps to converge. In this paper, we use the eligibility traces mechanism and propose the deep Q(λ) network algorithm. The proposed method provides faster learning in comparison with the DQN method. Empirical results on a range of games show that the deep Q(λ) network significantly reduces learning time.", "title": "" }, { "docid": "6fe413cf75a694217c30a9ef79fab589", "text": "Zusammenfassung) Biometrics have been used for secure identification and authentication for more than two decades since biometric data is unique, non-transferable, unforgettable, and always with us. Recently, biometrics has pervaded other aspects of security applications that can be listed under the topic of “Biometric Cryptosystems”. Although the security of some of these systems is questionable when they are utilized alone, integration with other technologies such as digital signatures or Identity Based Encryption (IBE) schemes results in cryptographically secure applications of biometrics. It is exactly this field of biometric cryptosystems that we focused in this thesis. In particular, our goal is to design cryptographic protocols for biometrics in the framework of a realistic security model with a security reduction. Our protocols are designed for biometric based encryption, signature and remote authentication. We first analyze the recently introduced biometric remote authentication schemes designed according to the security model of Bringer et al.. In this model, we show that one can improve the database storage cost significantly by designing a new architecture, which is a two-factor authentication protocol. This construction is also secure against the new attacks we present, which disprove the claimed security of remote authentication schemes, in particular the ones requiring a secure sketch. 
Thus, we introduce a new notion called “Weak-identity Privacy” and propose a new construction by combining cancelable biometrics and distributed remote authentication in order to obtain a highly secure biometric authentication system. We continue our research on biometric remote authentication by analyzing the security issues of multi-factor biometric authentication (MFBA). We formally describe the security model for MFBA that captures simultaneous attacks against these systems and define the notion of user privacy, where the goal of the adversary is to impersonate a client to the server. We design a new protocol by combining bipartite biotokens, homomorphic encryption and zero-knowledge proofs and provide a security reduction to achieve user privacy. The main difference of this MFBA protocol is that the server-side computations are performed in the encrypted domain but without requiring a decryption key for the authentication decision of the server. Thus, leakage of the secret key of any system component does not affect the security of the scheme as opposed to the current biometric systems involving crypto-", "title": "" }, { "docid": "ccd356a943f19024478c42b5db191293", "text": "This paper discusses the relationship between concepts of narrative, patterns of interaction within computer games constituting gameplay gestalts, and the relationship between narrative and the gameplay gestalt. The repetitive patterning involved in gameplay gestalt formation is found to undermine deep narrative immersion. The creation of stronger forms of interactive narrative in games requires the resolution of this confl ict. The paper goes on to describe the Purgatory Engine, a game engine based upon more fundamentally dramatic forms of gameplay and interaction, supporting a new game genre referred to as the fi rst-person actor. The fi rst-person actor does not involve a repetitive gestalt mode of gameplay, but defi nes gameplay in terms of character development and dramatic interaction.", "title": "" }, { "docid": "e5667a65bc628b93a1d5b0e37bfb8694", "text": "The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved. We address this challenging task by learning motion patterns in videos. The core of our approach is a fully convolutional network, which is learned entirely from synthetic video sequences, and their ground-truth optical flow and motion segmentation. This encoder-decoder style architecture first learns a coarse representation of the optical flow field features, and then refines it iteratively to produce motion labels at the original high-resolution. We further improve this labeling with an objectness map and a conditional random field, to account for errors in optical flow, and also to focus on moving things rather than stuff. The output label of each pixel denotes whether it has undergone independent motion, i.e., irrespective of camera motion. We demonstrate the benefits of this learning framework on the moving object segmentation task, where the goal is to segment all objects in motion. Our approach outperforms the top method on the recently released DAVIS benchmark dataset, comprising real-world sequences, by 5.6%. We also evaluate on the Berkeley motion segmentation database, achieving state-of-the-art results.", "title": "" }, { "docid": "12363093cb0441e0817d4c92ab88e7fb", "text": "Imperforate hymen, a condition in which the hymen has no aperture, usually occurs congenitally, secondary to failure of development of a lumen. 
A case of a documented simulated \"acquired\" imperforate hymen is presented in this article. The patient, a 5-year-old girl, was the victim of sexual abuse. Initial examination showed tears, scars, and distortion of the hymen, laceration of the perineal body, and loss of normal anal tone. Follow-up evaluations over the next year showed progressive healing. By 7 months after the injury, the hymen was replaced by a thick, opaque scar with no orifice. Patients with an apparent imperforate hymen require a sensitive interview and careful visual inspection of the genital and anal areas to delineate signs of injury. The finding of an apparent imperforate hymen on physical examination does not eliminate the possibility of antecedent vaginal penetration and sexual abuse.", "title": "" }, { "docid": "81b8c8490d47eea2b73b1a368d17d4b2", "text": "With the emergence of online social networks, the social network-based recommendation approach is popularly used. The major benefit of this approach is the ability of dealing with the problems with cold-start users. In addition to social networks, user trust information also plays an important role to obtain reliable recommendations. Although matrix factorization (MF) becomes dominant in recommender systems, the recommendation largely relies on the initialization of the user and item latent feature vectors. Aiming at addressing these challenges, we develop a novel trust-based approach for recommendation in social networks. In particular, we attempt to leverage deep learning to determinate the initialization in MF for trust-aware social recommendations and to differentiate the community effect in user’s trusted friendships. A two-phase recommendation process is proposed to utilize deep learning in initialization and to synthesize the users’ interests and their trusted friends’ interests together with the impact of community effect for recommendations. We perform extensive experiments on real-world social network data to demonstrate the accuracy and effectiveness of our proposed approach in comparison with other state-of-the-art methods.", "title": "" }, { "docid": "893a8c073b8bd935fbea419c0f3e0b17", "text": "Computing as a service model in cloud has encouraged High Performance Computing to reach out to wider scientific and industrial community. Many small and medium scale HPC users are exploring Infrastructure cloud as a possible platform to run their applications. However, there are gaps between the characteristic traits of an HPC application and existing cloud scheduling algorithms. In this paper, we propose an HPC-aware scheduler and implement it atop Open Stack scheduler. In particular, we introduce topology awareness and consideration for homogeneity while allocating VMs. We demonstrate the benefits of these techniques by evaluating them on a cloud setup on Open Cirrus test-bed.", "title": "" }, { "docid": "0879399fcb38c103a0e574d6d9010215", "text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. 
Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.", "title": "" }, { "docid": "1f4d29037bdb9da92843ca6ce4ab592d", "text": "Utilizing cumulative correlation information already existing in an evolutionary process, this paper proposes a predictive approach to the reproduction mechanism of new individuals for differential evolution (DE) algorithms. DE uses a distributed model (DM) to generate new individuals, which is relatively explorative, whilst evolution strategy (ES) uses a centralized model (CM) to generate offspring, which through adaptation retains a convergence momentum. This paper adopts a key feature in the CM of a covariance matrix adaptation ES, the cumulatively learned evolution path (EP), to formulate a new evolutionary algorithm (EA) framework, termed DEEP, standing for DE with an EP. Without mechanistically combining two CM and DM based algorithms together, the DEEP framework offers advantages of both a DM and a CM and hence substantially enhances performance. Under this architecture, a self-adaptation mechanism can be built inherently in a DEEP algorithm, easing the task of predetermining algorithm control parameters. Two DEEP variants are developed and illustrated in the paper. Experiments on the CEC'13 test suites and two practical problems demonstrate that the DEEP algorithms offer promising results, compared with the original DEs and other relevant state-of-the-art EAs.", "title": "" }, { "docid": "09c7331d77c5a9a2812df90e6e9256ea", "text": "We present a technique for approximating a light probe image as a constellation of light sources based on a median cut algorithm. The algorithm is efficient, simple to implement, and can realistically represent a complex lighting environment with as few as 64 point light sources.", "title": "" }, { "docid": "79351983ed6ba7bd3400b1a08c458fde", "text": "The intranuclear location of genomic loci and the dynamics of these loci are important parameters for understanding the spatial and temporal regulation of gene expression. Recently it has proven possible to visualize endogenous genomic loci in live cells by the use of transcription activator-like effectors (TALEs), as well as modified versions of the bacterial immunity clustered regularly interspersed short palindromic repeat (CRISPR)/CRISPR-associated protein 9 (Cas9) system. Here we report the design of multicolor versions of CRISPR using catalytically inactive Cas9 endonuclease (dCas9) from three bacterial orthologs. Each pair of dCas9-fluorescent proteins and cognate single-guide RNAs (sgRNAs) efficiently labeled several target loci in live human cells. Using pairs of differently colored dCas9-sgRNAs, it was possible to determine the intranuclear distance between loci on different chromosomes. 
In addition, the fluorescence spatial resolution between two loci on the same chromosome could be determined and related to the linear distance between them on the chromosome's physical map, thereby permitting assessment of the DNA compaction of such regions in a live cell.", "title": "" }, { "docid": "a928aa788221fc7f9a13d05a9d36badf", "text": "Segment routing is an emerging traffic engineering technique relying on Multi-protocol Label-Switched (MPLS) label stacking to steer traffic using the source-routing paradigm. Traffic flows are enforced through a given path by applying a specifically designed stack of labels (i.e., the segment list). Each packet is then forwarded along the shortest path toward the network element represented by the top label. Unlike traditional MPLS networks, segment routing maintains a per-flow state only at the ingress node; no signaling protocol is required to establish new flows or change the routing of active flows. Thus, control plane scalability is greatly improved. Several segment routing use cases have recently been proposed. As an example, it can be effectively used to dynamically steer traffic flows on paths characterized by low latency values. However, this may suffer from some potential issues. Indeed, deployed MPLS equipment typically supports a limited number of stacked labels. Therefore, it is important to define the proper procedures to minimize the required segment list depth. This work is focused on two relevant segment routing use cases: dynamic traffic recovery and traffic engineering in multi-domain networks. Indeed, in both use cases, the utilization of segment routing can significantly simplify the network operation with respect to traditional Internet Protocol (IP)/MPLS procedures. Thus, two original procedures based on segment routing are proposed for the aforementioned use cases. Both procedures are evaluated including a simulative analysis of the segment list depth. Moreover, an experimental demonstration is performed in a multi-layer test bed exploiting a software-defined-networking-based implementation of segment routing.", "title": "" }, { "docid": "4c165c15a3c6f069f702a54d0dab093c", "text": "We propose a simple method for improving the security of hashed passwords: the maintenance of additional ``honeywords'' (false passwords) associated with each user's account. An adversary who steals a file of hashed passwords and inverts the hash function cannot tell if he has found the password or a honeyword. The attempted use of a honeyword for login sets off an alarm. An auxiliary server (the ``honeychecker'') can distinguish the user password from honeywords for the login routine, and will set off an alarm if a honeyword is submitted.", "title": "" }, { "docid": "afe1be9e13ca6e2af2c5177809e7c893", "text": "Scar evaluation and revision techniques are chief among the most important skills in the facial plastic and reconstructive surgeon’s armamentarium. Often minimized in importance, these techniques depend as much on a thorough understanding of facial anatomy and aesthetics, advanced principles of wound healing, and an appreciation of the overshadowing psychological trauma as they do on thorough technical analysis and execution [1,2]. Scar revision is unique in the spectrum of facial plastic and reconstructive surgery because the initial traumatic event and its immediate treatment usually cannot be controlled. 
Patients who are candidates for scar revision procedures often present after significant loss of regional tissue, injury that crosses anatomically distinct facial aesthetic units, wound closure by personnel less experienced in plastic surgical technique, and poor post injury wound management [3,4]. While no scar can be removed completely, plastic surgeons can often improve the appearance of a scar, making it less obvious through the injection or application of certain steroid medications or through surgical procedures known as scar revisions.There are many variables affect the severity of scarring, including the size and depth of the wound, blood supply to the area, the thickness and color of your skin, and the direction of the scar [5,6].", "title": "" } ]
scidocsrr
af9985d0bbdd7ed220ef19db4974a657
Direct Torque Control for Induction Motor Using Fuzzy Logic
[ { "docid": "75e9b017838ccfdcac3b85030470a3bd", "text": "The new \"Direct Self-Control\" (DSC) is a simple method of signal processing, which gives converter fed three-phase machines an excellent dynamic performance. To control the torque e.g. of an induction motor it is sufficient to process the measured signals of the stator currents and the total flux linkages only. Optimal performance of drive systems is accomplished in steady state as well as under transient conditions by combination of several two limits controls. The expenses are less than in the case of proposed predictive control systems or FAM, if the converters switching frequency has to be kept minimal.", "title": "" } ]
[ { "docid": "67f37768d01c6f445fe069a31e99b8e2", "text": "WELCOME TO CLOUD TIDBITS! In each issue, I'll be looking at a different “tidbit” of technology that I consider unique or eye-catching and of particular interest to IEEE Cloud Computing readers. Today's tidbit is VoltDB, a new cloud database. This system caught my eye for several reasons. First, it's the latest database designed by Michael Stonebraker, the database pioneer best known for Ingres, PostgreSQL, Illustra, Streambase, and more recently, Vertica. But interestingly, in this goaround, Stonebraker declared that he has thrown “all previous database architecture out the window” and “started over with a complete rewrite.”1 What's resulted is something totally different from every other database-including all the columnand table-oriented NoSQL systems. Moreover, VoltDB claims a 50 to 100x speed improvement over other relational database management systems (RDBMSs) and NoSQL systems. It sounds too good to be true. What we have is nothing short of a whole class of SQL, as compared to the “NoSQL” compromises detailed above. This “total rearchitecture,” called NewSQL, supports 100 percent in memory operation, supports SQL and stored procedures, and has a loosely coupled scale-out capability perfectly matched to cloud computing platforms. Wait a minute! That doesn't sound possible. That's precisely why I thought it made for a perfect tidbit.", "title": "" }, { "docid": "099bd9e751b8c1e3a07ee06f1ba4b55b", "text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.", "title": "" }, { "docid": "a20b684deeb401855cbdc12cab90610a", "text": "A zero knowledge interactive proof system allows one person to convince another person of some fact without revealing the information about the proof. In particular, it does not enable the verifier to later convince anyone else that the prover has a proof of the theorem or even merely that the theorem is true (much less that he himself has a proof). This paper reviews the field of zero knowledge proof systems giving a brief overview of zero knowledge proof systems and the state of current research in this field.", "title": "" }, { "docid": "fb43cec4064dfad44d54d1f2a4981262", "text": "Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of know ledge base in a continuous vector space. 
Among these attempts, translation-based methods build entity and relati on vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimpli fied loss metric, and are not competitive enough to model various and complex entities/relations in knowledge bases. To address this issue, we propose TransA, an adaptive metric approach for embedding, utilizing the metric learning idea s to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.", "title": "" }, { "docid": "ce35a38f1ab8264554ca19fbe8017b82", "text": "Since the BOSS competition, in 2010, most steganalysis approaches use a learning methodology involving two steps: feature extraction, such as the Rich Models (RM), for the image representation, and use of the Ensemble Classifier (EC) for the learning step. In 2015, Qian et al. have shown that the use of a deep learning approach that jointly learns and computes the features, was very promising for the steganalysis. In this paper, we follow-up the study of Qian et al., and show that in the scenario where the steganograph always uses the same embedding key for embedding with the simulator in the different images, due to intrinsic joint minimization and the preservation of spatial information, the results obtained from a Convolutional Neural Network (CNN) or a Fully Connected Neural Network (FNN), if well parameterized, surpass the conventional use of a RM with an EC. First, numerous experiments were conducted in order to find the best ”shape” of the CNN. Second, experiments were carried out in the clairvoyant scenario in order to compare the CNN and FNN to an RM with an EC. The results show more than 16% reduction in the classification error with our CNN or FNN. Third, experiments were also performed in a cover-source mismatch setting. The results show that the CNN and FNN are naturally robust to the mismatch problem. In Addition to the experiments, we provide discussions on the internal mechanisms of a CNN, and weave links with some previously stated ideas, in order to understand the results we obtained. We also have a discussion on the scenario ”same embedding key”.", "title": "" }, { "docid": "4f3fe8ea0487690b4a8f61b488e96d53", "text": "Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label where the bag label probability is fully parameterized by neural networks. Furthermore, we propose a neural network-based permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, an application of the proposed attention-based operator provides insight into the contribution of each instance to the bag label. We show empirically that our approach achieves comparable performance to the best MIL methods on benchmark MIL datasets and it outperforms other methods on a MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.", "title": "" }, { "docid": "02ffa1b39ac9e76239eff040121938a3", "text": "Machine learning can be utilized in many different ways in the field of automatic manufacturing and logistics. 
In this thesis supervised machine learning have been utilized to train a classifiers for detection and recognition of objects in images. The techniques AdaBoost and Random forest have been examined, both are based on decision trees. The thesis has considered two applications: barcode detection and optical character recognition (OCR). Supervised machine learning methods are highly appropriate in both applications since both barcodes and printed characters generally are rather distinguishable. The first part of this thesis examines the use of machine learning for barcode detection in images, both traditional 1D-barcodes and the more recent Maxi-codes, which is a type of two-dimensional barcode. In this part the focus has been to train classifiers with the technique AdaBoost. The Maxi-code detection is mainly done with Local binary pattern features. For detection of 1D-codes, features are calculated from the structure tensor. The classifiers have been evaluated with around 200 real test images, containing barcodes, and shows promising results. The second part of the thesis involves optical character recognition. The focus in this part has been to train a Random forest classifier by using the technique point pair features. The performance has also been compared with the more proven and widely used Haar-features. Although, the result shows that Haar-features are superior in terms of accuracy. Nevertheless the conclusion is that point pairs can be utilized as features for Random forest in OCR.", "title": "" }, { "docid": "c6cfc50062e42f51c9ac0db3b4faed83", "text": "We put forward two new measures of security for threshold schemes secure in the adaptive adversary model: security under concurrent composition; and security without the assumption of reliable erasure. Using novel constructions and analytical tools, in both these settings, we exhibit efficient secure threshold protocols for a variety of cryptographic applications. In particular, based on the recent scheme by Cramer-Shoup, we construct adaptively secure threshold cryptosystems secure against adaptive chosen ciphertext attack under the DDH intractability assumption. Our techniques are also applicable to other cryptosystems and signature schemes, like RSA, DSS, and ElGamal. Our techniques include the first efficient implementation, for a wide but special class of protocols, of secure channels in erasure-free adaptive model. Of independent interest, we present the notion of a committed proof.", "title": "" }, { "docid": "d483da5197688c5deede276b63d81867", "text": "We present a stochastic model of the daily operations of an airline. Its primary purpose is to evaluate plans, such as crew schedules, as well as recovery policies in a random environment. We describe the structure of the stochastic model, sources of disruptions, recovery policies, and performance measures. Then, we describe SimAir—our simulation implementation of the stochastic model, and we give computational results. Finally, we give future directions for the study of airline recovery policies and planning under uncertainty.", "title": "" }, { "docid": "d212f981eb8cc6054b2651009179b722", "text": "A sixth-order 10.7-MHz bandpass switched-capacitor filter based on a double terminated ladder filter is presented. The filter uses a multipath operational transconductance amplifier (OTA) that presents both better accuracy and higher slew rate than previously reported Class-A OTA topologies. 
Design techniques based on charge cancellation and slower clocks are used to reduce the overall capacitance from 782 down to 219 unity capacitors. The filter's center frequency and bandwidth are 10.7 MHz and 400 kHz, respectively, with a passband ripple of 1 dB across the entire passband. The quality factor of the resonators used as filter terminations is around 32. The measured (filter + buffer) third-intermodulation (IM3) distortion is less than -40 dB for a two-tone input signal of +3-dBm power level each. The signal-to-noise ratio is roughly 58 dB while the IM3 is -45 dB; the power consumption for the standalone filter is 42 mW. The chip was fabricated in a 0.35-μm CMOS process; the filter's area is 0.84 mm2", "title": "" }, { "docid": "7381d61eea849ecdf74c962042d0c5ff", "text": "Unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) is very important for battlefield awareness. For SAR systems mounted on a UAV, the motion errors can be considerably high due to atmospheric turbulence and aircraft properties, such as its small size, which makes motion compensation (MOCO) in UAV SAR more urgent than in other SAR systems. In this paper, based on 3-D motion error analysis, a novel 3-D MOCO method is proposed. The main idea is to extract necessary motion parameters, i.e., forward velocity and displacement in line-of-sight direction, from radar raw data, based on an instantaneous Doppler rate estimate. Experimental results show that the proposed method is suitable for low- or medium-altitude UAV SAR systems equipped with a low-accuracy inertial navigation system.", "title": "" }, { "docid": "f60426bdd66154a7d2cb6415abd8f233", "text": "In the rapidly expanding field of parallel processing, job schedulers are the “operating systems” of modern big data architectures and supercomputing systems. Job schedulers allocate computing resources and control the execution of processes on those resources. Historically, job schedulers were the domain of supercomputers, and job schedulers were designed to run massive, long-running computations over days and weeks. More recently, big data workloads have created a need for a new class of computations consisting of many short computations taking seconds or minutes that process enormous quantities of data. For both supercomputers and big data systems, the efficiency of the job scheduler represents a fundamental limit on the efficiency of the system. Detailed measurement and modeling of the performance of schedulers are critical for maximizing the performance of a large-scale computing system. This paper presents a detailed feature analysis of 15 supercomputing and big data schedulers. For big data workloads, the scheduler latency is the most important performance characteristic of the scheduler. A theoretical model of the latency of these schedulers is developed and used to design experiments targeted at measuring scheduler latency. Detailed benchmarking of four of the most popular schedulers (Slurm, Son of Grid Engine, Mesos, and Hadoop YARN) is conducted. The theoretical model is compared with data and demonstrates that scheduler performance can be characterized by two key parameters: the marginal latency of the scheduler ts and a nonlinear exponent αs. For all four schedulers, the utilization of the computing system decreases to <10% for computations lasting only a few seconds. 
Multi-level schedulers (such as LLMapReduce) that transparently aggregate short computations can improve utilization for these short computations to >90% for all four of the schedulers that were tested.", "title": "" }, { "docid": "1ade3a53c754ec35758282c9c51ced3d", "text": "Radical hysterectomy represents the treatment of choice for FIGO stage IA2–IIA cervical cancer. It is associated with several serious complications such as urinary and anorectal dysfunction due to surgical trauma to the autonomous nervous system. In order to determine those surgical steps involving the risk of nerve injury during both classical and nerve-sparing radical hysterectomy, we investigated the relationships between pelvic fascial, vascular and nervous structures in a large series of embalmed and fresh female cadavers. We showed that the extent of potential denervation after classical radical hysterectomy is directly correlated with the radicality of the operation. The surgical steps that carry a high risk of nerve injury are the resection of the uterosacral and vesicouterine ligaments and of the paracervix. A nerve-sparing approach to radical hysterectomy for cervical cancer is feasible if specific resection limits, such as the deep uterine vein, are carefully identified and respected. However, a nerve-sparing surgical effort should be balanced with the oncological priorities of removal of disease and all its potential routes of local spread.", "title": "" }, { "docid": "73d0013bb021a9ad2100cc8e3f938ec8", "text": "The rapid development of electron tomography, in particular the introduction of novel tomographic imaging modes, has led to the visualization and analysis of three-dimensional structural and chemical information from materials at the nanometre level. In addition, the phase information revealed in electron holograms allows electrostatic and magnetic potentials to be mapped quantitatively with high spatial resolution and, when combined with tomography, in three dimensions. 
Here we present an overview of the techniques of electron tomography and electron holography and demonstrate their capabilities with the aid of case studies that span materials science and the interface between the physical sciences and the life sciences.", "title": "" }, { "docid": "19b915816b9e93731b900f84bc40ad5b", "text": "It is a truth universally acknowledged that \"a picture is worth a thousand words\". The emergence of digital media has taken this saying to a completely new level. By using steganography, one can hide not only 1000, but thousands of words even in an average sized image. This article presents various types of techniques used by modern digital steganography, as well as the implementation of the least significant bit (LSB) method. The main objective is to develop an application that uses LSB insertion in order to encode data into a cover image. Both a serial and parallel version are proposed and an analysis of the performance is made using images ranging from 1.9 to 131 megapixels.", "title": "" }, { "docid": "b4a2c3679fe2490a29617c6a158b9dbc", "text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.", "title": "" }, { "docid": "97e5f2e774b58f7533242114e5e06159", "text": "We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object based on the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm to incorporate the condition of sparsity and to successively estimate and refine the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent since it estimates the phase accurately.", "title": "" }, { "docid": "5ae07e0d3157b62f6d5e0e67d2b7f2ea", "text": "G. Francis and F. Hermens (2002) used computer simulations to claim that many current models of metacontrast masking can account for the findings of V. Di Lollo, J. T. Enns, and R. A. Rensink (2000). They also claimed that notions of reentrant processing are not necessary because all of V. Di Lollo et al.'s data can be explained by feed-forward models. The authors show that G. Francis and F. 
Hermens's claims are vitiated by inappropriate modeling of attention and by ignoring important aspects of V. Di Lollo et al.'s results.", "title": "" }, { "docid": "3f629998235c1cfadf67cf711b07f8b9", "text": "The capacity to gather and timely deliver to the service level any relevant information that can characterize the service-provisioning environment, such as computing resources/capabilities, physical device location, user preferences, and time constraints, usually defined as context-awareness, is widely recognized as a core function for the development of modern ubiquitous and mobile systems. Much work has been done to enable context-awareness and to ease the diffusion of context-aware services; at the same time, several middleware solutions have been designed to transparently implement context management and provisioning in the mobile system. However, to the best of our knowledge, an in-depth analysis of the context data distribution, namely, the function in charge of distributing context data to interested entities, is still missing. Starting from the core assumption that only effective and efficient context data distribution can pave the way to the deployment of truly context-aware services, this article aims at putting together current research efforts to derive an original and holistic view of the existing literature. We present a unified architectural model and a new taxonomy for context data distribution by considering and comparing a large number of solutions. Finally, based on our analysis, we draw some of the research challenges still unsolved and identify some possible directions for future work.", "title": "" }, { "docid": "660f957b70e53819724e504ed3de0776", "text": "We propose several econometric measures of connectedness based on principal-components analysis and Granger-causality networks, and apply them to the monthly returns of hedge funds, banks, broker/dealers, and insurance companies. We find that all four sectors have become highly interrelated over the past decade, likely increasing the level of systemic risk in the finance and insurance industries through a complex and time-varying network of relationships. These measures can also identify and quantify financial crisis periods, and seem to contain predictive power in out-of-sample tests. Our results show an asymmetry in the degree of connectedness among the four sectors, with banks playing a much more important role in transmitting shocks than other financial institutions. © 2011 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
efa1a1abdec28d20e35262578d71ae34
Neighborhood Mixture Model for Knowledge Base Completion
[ { "docid": "a5b7253f56a487552ba3b0ce15332dd1", "text": "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (Socher et al., 2013) and TransE (Bordes et al., 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and/or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as BornInCitypa, bq ^ CityInCountrypb, cq ùñ Nationalitypa, cq. We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics, and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-ofthe-art confidence-based rule mining approach in mining horn rules that involve compositional reasoning.", "title": "" }, { "docid": "95e2a8e2d1e3a1bbfbf44d20f9956cf0", "text": "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to stateof-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: //github.com/mrlyk423/relation extraction.", "title": "" }, { "docid": "8093219e7e2b4a7067f8d96118a5ea93", "text": "We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. 
Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-of-the-art performance.", "title": "" }, { "docid": "7072c7b94fc6376b13649ec748612705", "text": "Performing link prediction in Knowledge Bases (KBs) with embedding-based models, like with the model TransE (Bordes et al., 2013) which represents relationships as translations in the embedding space, has shown promising results in recent years. Most of these works are focused on modeling single relationships and hence do not take full advantage of the graph structure of KBs. In this paper, we propose an extension of TransE that learns to explicitly model composition of relationships via the addition of their corresponding translation vectors. We show empirically that this allows us to improve performance for predicting single relationships as well as compositions of pairs of them.", "title": "" } ]
[ { "docid": "b0840d44b7ec95922eeed4ef71b338f9", "text": "Decoding DNA symbols using next-generation sequencers was a major breakthrough in genomic research. Despite the many advantages of next-generation sequencers, e.g., the high-throughput sequencing rate and relatively low cost of sequencing, the assembly of the reads produced by these sequencers still remains a major challenge. In this review, we address the basic framework of next-generation genome sequence assemblers, which comprises four basic stages: preprocessing filtering, a graph construction process, a graph simplification process, and postprocessing filtering. Here we discuss them as a framework of four stages for data analysis and processing and survey variety of techniques, algorithms, and software tools used during each stage. We also discuss the challenges that face current assemblers in the next-generation environment to determine the current state-of-the-art. We recommend a layered architecture approach for constructing a general assembler that can handle the sequences generated by different sequencing platforms.", "title": "" }, { "docid": "b798103f64ec684a4d0f530c7add8eeb", "text": "Self-adaptation is a key feature of evolutionary algorithms (EAs). Although EAs have been used successfully to solve a wide variety of problems, the performance of this technique depends heavily on the selection of the EA parameters. Moreover, the process of setting such parameters is considered a time-consuming task. Several research works have tried to deal with this problem; however, the construction of algorithms letting the parameters adapt themselves to the problem is a critical and open problem of EAs. This work proposes a novel ensemble machine learning method that is able to learn rules, solve problems in a parallel way and adapt parameters used by its components. A self-adaptive ensemble machine consists of simultaneously working extended classifier systems (XCSs). The proposed ensemble machine may be treated as a meta classifier system. A new self-adaptive XCS-based ensemble machine was compared with two other XCSbased ensembles in relation to one-step binary problems: Multiplexer, One Counts, Hidden Parity, and randomly generated Boolean functions, in a noisy version as well. Results of the experiments have shown the ability of the model to adapt the mutation rate and the tournament size. The results are analyzed in detail.", "title": "" }, { "docid": "a856b4fc2ec126ee3709d21ff4c3c49c", "text": "In this work, glass fiber reinforced epoxy composites were fabricated. Epoxy resin was used as polymer matrix material and glass fiber was used as reinforcing material. The main focus of this work was to fabricate this composite material by the cheapest and easiest way. For this, hand layup method was used to fabricate glass fiber reinforced epoxy resin composites and TiO2 material was used as filler material. Six types of compositions were made with and without filler material keeping the glass fiber constant and changing the epoxy resin with respect to filler material addition. Mechanical properties such as tensile, impact, hardness, compression and flexural properties were investigated. Additionally, microscopic analysis was done. The experimental investigations show that without filler material the composites exhibit overall lower value in mechanical properties than with addition of filler material in the composites. 
The results also show that addition of filler material increases the mechanical properties but highest values were obtained for different filler material addition. From the obtained results, it was observed that composites filled by 15wt% of TiO2 particulate exhibited maximum tensile strength, 20wt% of TiO2 particulate exhibited maximum impact strength, 25wt% of TiO2 particulate exhibited maximum hardness value, 25wt% of TiO2 particulate exhibited maximum compressive strength, 20wt% of TiO2 particulate exhibited maximum flexural strength.", "title": "" }, { "docid": "9b7ca792de0889191567a47410cb2970", "text": "P2P online lending platforms have become increasingly developed. However, these platforms may suffer a serious loss caused by default behaviors of borrowers. In this paper, we present an effective default behavior prediction model to reduce default risk in P2P lending. The proposed model uses mobile phone usage data, which are generated from widely used mobile phones. We extract features from five aspects, including consumption, social network, mobility, socioeconomic, and individual attribute. Based on these features, we propose a joint decision model, which makes a default risk judgment through combining Random Forests with Light Gradient Boosting Machine. Validated by a real-world dataset collected by a mobile carrier and a P2P lending company in China, the proposed model not only demonstrates satisfactory performance on the evaluation metrics but also outperforms the existing methods in this area. Based on these results, the proposed model implies the high feasibility and potential to be adopted in real-world P2P online lending platforms.", "title": "" }, { "docid": "6f1e71399e5786eb9c3923a1e967cd8f", "text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23", "title": "" }, { "docid": "0537a00983f91942099d93a5a2c22195", "text": "Conflicting evidence exists regarding the optimal treatment for abscess complicating acute appendicitis. The objective of this study is to compare immediate appendectomy (IMM APP) versus expectant management (EXP MAN) including percutaneous drainage with or without interval appendectomy to treat periappendiceal abscess. One hundred four patients with acute appendicitis complicated by periappendiceal abscess were identified. We compared 36 patients who underwent IMM APP with 68 patients who underwent EXP MAN. 
Outcome measures included morbidity and length of hospital stay. The groups were similar with regard to age (30.6 +/- 12.3 vs. 34.8 +/- 13.5 years), gender (61% vs. 62% males), admission WBC count (17.5 +/- 5.1 x 10(3) vs. 17.0 +/- 4.8 x 10(3) cells/dL), and admission temperature (37.9 +/- 1.2 vs. 37.8 +/- 0.9 degrees F). IMM APP patients had a higher rate of complications than EXP MAN patients at initial hospitalization (58% vs. 15%, P < 0.001) and for all hospitalizations (67% vs. 24%, P < 0.001). The IMM APP group also had a longer initial (14.8 +/- 16.1 vs. 9.0 +/- 4.8 days, P = 0.01) and overall hospital stay (15.3 +/- 16.2 vs. 10.7 +/- 5.4 days, P = 0.04). We conclude that percutaneous drainage and interval appendectomy is preferable to immediate appendectomy for treatment of appendiceal abscess because it leads to a lower complication rate and a shorter hospital stay.", "title": "" }, { "docid": "0c4f02b3b361d60da1aec0f0c100dcf9", "text": "Architecture Compliance Checking (ACC) is an approach to verify the conformance of implemented program code to high-level models of architectural design. ACC is used to prevent architectural erosion during the development and evolution of a software system. Static ACC, based on static software analysis techniques, focuses on the modular architecture and especially on rules constraining the modular elements. A semantically rich modular architecture (SRMA) is expressive and may contain modules with different semantics, like layers and subsystems, constrained by rules of different types. To check the conformance to an SRMA, ACC-tools should support the module and rule types used by the architect. This paper presents requirements regarding SRMA support and an inventory of common module and rule types, on which basis eight commercial and non-commercial tools were tested. The test results show large differences between the tools, but all could improve their support of SRMA, what might contribute to the adoption of ACC in practice.", "title": "" }, { "docid": "773b5914dce6770b2db707ff4536c7f6", "text": "This paper presents an automatic drowsy driver monitoring and accident prevention system that is based on monitoring the changes in the eye blink duration. Our proposed method detects visual changes in eye locations using the proposed horizontal symmetry feature of the eyes. Our new method detects eye blinks via a standard webcam in real-time at 110fps for a 320×240 resolution. Experimental results in the JZU [3] eye-blink database showed that the proposed system detects eye blinks with a 94% accuracy with a 1% false positive rate.", "title": "" }, { "docid": "7abe1fd1b0f2a89bf51447eaef7aa989", "text": "End users increasingly expect ubiquitous connectivity while on the move. With a variety of wireless access technologies available, we expect to always be connected to the technology that best matches our performance goals and price points. Meanwhile, sophisticated onboard units (OBUs) enable geolocation and complex computation in support of handover. 
In this paper, we present an overview of vertical handover techniques and propose an algorithm empowered by the IEEE 802.21 standard, which considers the particularities of the vehicular networks (VNs), the surrounding context, the application requirements, the user preferences, and the different available wireless networks [i.e., Wireless Fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), and Universal Mobile Telecommunications System (UMTS)] to improve users' quality of experience (QoE). Our results demonstrate that our approach, under the considered scenario, is able to meet application requirements while ensuring user preferences are also met.", "title": "" }, { "docid": "5bd168673acca10828a03cbfd80e8932", "text": "Since a biped humanoid inherently suffers from instability and always risks tipping itself over, ensuring high stability and reliability of walk is one of the most important goals. This paper proposes a walk control consisting of a feedforward dynamic pattern and a feedback sensory reflex. The dynamic pattern is a rhythmic and periodic motion, which satisfies the constraints of dynamic stability and ground conditions, and is generated assuming that the models of the humanoid and the environment are known. The sensory reflex is a simple, but rapid motion programmed in respect to sensory information. The sensory reflex we propose in this paper consists of the zero moment point reflex, the landing-phase reflex, and the body-posture reflex. With the dynamic pattern and the sensory reflex, it is possible for the humanoid to walk rhythmically and to adapt itself to the environmental uncertainties. The effectiveness of our proposed method was confirmed by dynamic simulation and walk experiments on an actual 26-degree-of-freedom humanoid.", "title": "" }, { "docid": "29a0e5ddd495b46b73ea71b1983fd73b", "text": "Data extraction from web pages is the process of analyzing and retrieving relevant data out of the data sources (usually unstructured or poorly structured) in a specific pattern for further processing; it involves the addition of metadata and data integration details for further processing in the data workflow. This survey gives an overview of the different web data extraction and data alignment techniques. Extraction techniques are DeLa, DEPTA, ViPER, and ViNT. Data alignment techniques are Pairwise QRR alignment, Holistic alignment, Nested structure processing. Query result pages are generated by a Web database based on a user's query. Automatically extracting the data from these query result pages is very important for many applications, such as data integration, which need to cooperate with multiple web databases. A new method is proposed for data extraction that combines both tag and value similarity. It automatically extracts data from query result pages by first identifying and segmenting the query result records (QRRs) in the query result pages and then aligning the segmented QRRs into a table, in which the data values from the same attribute are put into the same column. The data region identification method identifies the noncontiguous QRRs that have the same parents according to their tag similarities. 
Specifically, we propose new techniques to handle the case when the QRRs are not contiguous, which may be due to presence of auxiliary information, such as a comment, recommendation or advertisement, and for handling any nested structure that may exist in the QRRs.", "title": "" }, { "docid": "1f752034b5307c0118d4156d0b95eab3", "text": "Importance\nTherapy-related myeloid neoplasms are a potentially life-threatening consequence of treatment for autoimmune disease (AID) and an emerging clinical phenomenon.\n\n\nObjective\nTo query the association of cytotoxic, anti-inflammatory, and immunomodulating agents to treat patients with AID with the risk for developing myeloid neoplasm.\n\n\nDesign, Setting, and Participants\nThis retrospective case-control study and medical record review included 40 011 patients with an International Classification of Diseases, Ninth Revision, coded diagnosis of primary AID who were seen at 2 centers from January 1, 2004, to December 31, 2014; of these, 311 patients had a concomitant coded diagnosis of myelodysplastic syndrome (MDS) or acute myeloid leukemia (AML). Eighty-six cases met strict inclusion criteria. A case-control match was performed at a 2:1 ratio.\n\n\nMain Outcomes and Measures\nOdds ratio (OR) assessment for AID-directed therapies.\n\n\nResults\nAmong the 86 patients who met inclusion criteria (49 men [57%]; 37 women [43%]; mean [SD] age, 72.3 [15.6] years), 55 (64.0%) had MDS, 21 (24.4%) had de novo AML, and 10 (11.6%) had AML and a history of MDS. Rheumatoid arthritis (23 [26.7%]), psoriasis (18 [20.9%]), and systemic lupus erythematosus (12 [14.0%]) were the most common autoimmune profiles. Median time from onset of AID to diagnosis of myeloid neoplasm was 8 (interquartile range, 4-15) years. A total of 57 of 86 cases (66.3%) received a cytotoxic or an immunomodulating agent. In the comparison group of 172 controls (98 men [57.0%]; 74 women [43.0%]; mean [SD] age, 72.7 [13.8] years), 105 (61.0%) received either agent (P = .50). Azathioprine sodium use was observed more frequently in cases (odds ratio [OR], 7.05; 95% CI, 2.35- 21.13; P < .001). Notable but insignificant case cohort use among cytotoxic agents was found for exposure to cyclophosphamide (OR, 3.58; 95% CI, 0.91-14.11) followed by mitoxantrone hydrochloride (OR, 2.73; 95% CI, 0.23-33.0). Methotrexate sodium (OR, 0.60; 95% CI, 0.29-1.22), mercaptopurine (OR, 0.62; 95% CI, 0.15-2.53), and mycophenolate mofetil hydrochloride (OR, 0.66; 95% CI, 0.21-2.03) had favorable ORs that were not statistically significant. No significant association between a specific length of time of exposure to an agent and the drug's category was observed.\n\n\nConclusions and Relevance\nIn a large population with primary AID, azathioprine exposure was associated with a 7-fold risk for myeloid neoplasm. The control and case cohorts had similar systemic exposures by agent category. No association was found for anti-tumor necrosis factor agents. Finally, no timeline was found for the association of drug exposure with the incidence in development of myeloid neoplasm.", "title": "" }, { "docid": "82af21c1e687d7303c06cef4b66f1fb4", "text": "Strategic planning and talent management in large enterprises composed of knowledge workers requires complete, accurate, and up-to-date representation of the expertise of employees in a form that integrates with business processes. 
Like other similar organizations operating in dynamic environments, the IBM Corporation strives to maintain such current and correct information, specifically assessments of employees against job roles and skill sets from its expertise taxonomy. In this work, we deploy an analytics-driven solution that infers the expertise of employees through the mining of enterprise and social data that is not specifically generated and collected for expertise inference. We consider job role and specialty prediction and pose them as supervised classification problems. We evaluate a large number of feature sets, predictive models and postprocessing algorithms, and choose a combination for deployment. This expertise analytics system has been deployed for key employee population segments, yielding large reductions in manual effort and the ability to continually and consistently serve up-to-date and accurate data for several business functions. This expertise management system is in the process of being deployed throughout the corporation.", "title": "" }, { "docid": "b1d348e2095bd7054cc11bd84eb8ccdc", "text": "Epidermolysis bullosa (EB) is a group of inherited, mechanobullous disorders caused by mutations in various structural proteins in the skin. There have been several advances in the classification of EB since it was first introduced in the late 19th century. We now recognize four major types of EB, depending on the location of the target proteins and level of the blisters: EB simplex (epidermolytic), junctional EB (lucidolytic), dystrophic EB (dermolytic), and Kindler syndrome (mixed levels of blistering). This contribution will summarize the most recent classification and discuss the molecular basis, target genes, and proteins involved. We have also included new subtypes, such as autosomal dominant junctional EB and autosomal recessive EB due to mutations in the dystonin (DST) gene, which encodes the epithelial isoform of bullous pemphigoid antigen 1. The main laboratory diagnostic techniques (immunofluorescence mapping, transmission electron microscopy, and mutation analysis) will also be discussed. Finally, the clinical characteristics of the different major EB types and subtypes will be reviewed.", "title": "" }, { "docid": "fe3aa62af7f769d25d51c60444be0907", "text": "Neurophysiological recording techniques are helping provide marketers and salespeople with an increased understanding of their targeted customers. Such tools are also providing information systems researchers more insight into their end-users. These techniques may also be used introspectively to help researchers learn more about their own techniques. Here we look to help salespeople have an increased understanding of their selling methods by looking through their eyes instead of through the eyes of the customer. A preliminary study is presented using electroencephalography of three sales experts while watching the first moments of a video of a sales pitch to understand mental processing during the approach phase. Follow-on work is described, along with considerations for interpreting data in light of individual differences.", "title": "" }, { "docid": "e6f5c58910c877ade6594e206ac19e02", "text": "Model compression is an effective technique for deploying neural network models on hardware with limited computation and low power. 
However, conventional compression techniques rely on hand-crafted features [2,3,12] and on experts exploring a large design space that trades off size, speed, and accuracy, which usually yields diminishing returns and is time-consuming. This paper analyzes deep auto compression (ADC), which uses reinforcement learning for sample-efficient exploration of the design space, to improve the compression quality of the model. The compressed models are obtained without any human effort and in a completely automated way. With a 4-fold reduction in FLOPs, accuracy is 2.8% higher than that of the manually compressed model for VGG-16 on ImageNet.", "title": "" }, { "docid": "7ab5f56b615848ba5d8dc2f149fd8bf2", "text": "At present, most outdoor video-surveillance, driver-assistance and optical remote sensing systems have been designed to work under good visibility and weather conditions. Poor visibility often occurs in foggy or hazy weather conditions and can strongly influence the accuracy or even the general functionality of such vision systems. Consequently, it is important to import actual weather-condition data to the appropriate processing mode. Recently, significant progress has been made in haze removal from a single image [1,2]. Based on the hazy weather classification, specialized approaches, such as a dehazing process, can be employed to improve recognition. Figure 1 shows a sample processing flow of our dehazing program.", "title": "" }, { "docid": "855b80a4dd22e841c8a929b20eb6e002", "text": "Accuracy and stability of Kinect-like depth data is limited by its generating principle. In order to serve further applications with high quality depth, the preprocessing on depth data is essential. In this paper, we analyze the characteristics of the Kinect-like depth data by examining its generation principle and propose a spatial-temporal denoising algorithm taking into account its special properties. Both the intra-frame spatial correlation and the inter-frame temporal correlation are exploited to fill the depth hole and suppress the depth noise. Moreover, a divisive normalization approach is proposed to assist the noise filtering process. The 3D rendering results of the processed depth demonstrate that the lost depth is recovered in some hole regions and the noise is suppressed with depth features preserved.", "title": "" }, { "docid": "0b51b727f39a9c8ea6580794c6f1e2bb", "text": "Many researchers have proposed different methodologies for text skew estimation in binary and grayscale images. They have been used widely for skew identification of printed text. There exist many algorithms for detecting and correcting a slant or skew in a given document or image. Some of them provide better accuracy but are slow, while others have an angle limitation drawback. The new technique for skew detection proposed in this paper reduces time and cost. Keywords— Document image processing, Skew detection, Nearest-neighbour approach, Moments, Hough transformation.", "title": "" }, { "docid": "6c406578abde6104439470f9e3187c7e", "text": "Extended superficial musculoaponeurotic system (SMAS) rhytidectomy has been advocated for improving nasolabial fold prominence. Extended sub-SMAS dissection requires release of the SMAS typically from the upper lateral border of the zygomaticus major muscle and continued dissection medial to this muscle. 
This maneuver releases the zygomatic retaining ligaments and achieves more effective mobilization and elevation of the ptotic malar soft tissues, resulting in more dramatic effacement of the nasolabial crease. Despite its presumed advantages, few reports have suggested greater risk of nerve injury with this technique compared with other limited sub-SMAS dissection techniques. Although the caudal extent of the zygomaticus muscle insertion to the modiolus of the mouth has been well delineated, the more cephalad origin has been vaguely defined. We attempted to define anatomic landmarks which could serve to more reliably identify the upper extent of the lateral zygomaticus major muscle border and more safely guide extended sub-SMAS dissections. Bilateral zygomaticus major muscles were identified in 13 cadaver heads with 4.0-power loupe magnification. Bony anatomic landmarks were identified that would predict the location of the lateral border of the zygomaticus major muscle. The upper extent of the lateral border of the zygomaticus major muscle was defined in relation to an oblique line extending from the mental protuberance to the notch defined at the most anterior-inferior aspect of the temporal fossa at the junction of the frontal process and temporal process of the zygomatic bone. The lateral border of the zygomaticus major muscle was observed 4.4 +/- 2.2 mm lateral and parallel to this line. More accurate prediction of the location of the upper extent of the lateral border of the zygomaticus major muscle using the above bony anatomic landmarks may limit nerve injury during SMAS dissections in extended SMAS rhytidectomy.", "title": "" } ]
scidocsrr
68f2bf965191c6c8fede96c83c3894a6
Interpretable VAEs for nonlinear group factor analysis
[ { "docid": "db75809bcc029a4105dc12c63e2eca76", "text": "Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience. Using multi-modal magnetic resonance images from the Human Connectome Project (HCP) and an objective semi-automated neuroanatomical approach, we delineated 180 areas per hemisphere bounded by sharp changes in cortical architecture, function, connectivity, and/or topography in a precisely aligned group average of 210 healthy young adults. We characterized 97 new areas and 83 areas previously reported using post-mortem microscopy or other specialized study-specific approaches. To enable automated delineation and identification of these areas in new HCP subjects and in future studies, we trained a machine-learning classifier to recognize the multi-modal ‘fingerprint’ of each cortical area. This classifier detected the presence of 96.6% of the cortical areas in new subjects, replicated the group parcellation, and could correctly locate areas in individuals with atypical parcellations. The freely available parcellation and classifier will enable substantially improved neuroanatomical precision for studies of the structural and functional organization of human cerebral cortex and its variation across individuals and in development, aging, and disease.", "title": "" } ]
[ { "docid": "732e72f152075d47f6473910a2e98e9f", "text": "In this paper we describe the ForSpec Temporal Logic (FTL), the new temporal property-specification logic of ForSpec, Intel’s new formal specification language. The key features of FTL are as follows: it is a l inear temporal logic, based on Pnueli’s LTL, it is based on a rich set of logic al and arithmetical operations on bit vectors to describe state properties, it enables the user to define temporal connectives over time windows, it enables th user to define regular events, which are regular sequences of Boolean events, and then relate such events via special connectives, it enables the user to expre ss roperties about the past, and it includes constructs that enable the user to mode l multiple clock and reset signals, which is useful in the verification of hardwar e design.", "title": "" }, { "docid": "6ba537ef9dd306a3caaba63c2b48c222", "text": "A lumped-element circuit is proposed to model a coplanar waveguide (CPW) interdigital capacitor (IDC). Closed-form expressions suitable for CAD purposes are given for each element in the circuit. The obtained results for the series capacitance are in good agreement with those available in the literature. In addition, the scattering parameters obtained from the circuit model are compared with those obtained using the full-wave method of moments (MoM) and good agreement is obtained. Moreover, a multilayer feed-forward artificial neural network (ANN) is developed to model the capacitance of the CPW IDC. It is shown that the developed ANN has successfully learned the required task of evaluating the capacitance of the IDC. © 2005 Wiley Periodicals, Inc. Int J RF and Microwave CAE 15: 551–559, 2005.", "title": "" }, { "docid": "03fb57d2810ed42f7fe57f688db6fd57", "text": "This paper reviews some of the accomplishments in the field of robot dynamics research, from the development of the recursive Newton-Euler algorithm to the present day. Equations and algorithms are given for the most important dynamics computations, expressed in a common notation to facilitate their presentation and comparison.", "title": "" }, { "docid": "c2081b44d63490f2967517558065bdf0", "text": "The add-on battery pack in plug-in hybrid electric vehicles can be charged from an AC outlet, feed power back to the grid, provide power for electric traction, and capture regenerative energy when braking. Conventionally, three-stage bidirectional converter interfaces are used to fulfil these functions. In this paper, a single stage integrated converter is proposed based on direct AC/DC conversion theory. The proposed converter eliminates the full bridge rectifier, reduces the number of semiconductor switches and high current inductors, and improves the conversion efficiency.", "title": "" }, { "docid": "b8274589a145a94e19329b2640a08c17", "text": "Since 2004, many nations have started issuing “e-passports” containing an RFID tag that, when powered, broadcast information. It is claimed that these passports are more secure and that our data will be protected from any possible unauthorised attempts to read it. In this paper we show that there is a flaw in one of the passport’s protocols that makes it possible to trace the movements of a particular passport, without having to break the passport’s cryptographic key. All an attacker has to do is to record one session between the passport and a legitimate reader, then by replaying a particular message, the attacker can distinguish that passport from any other. 
We have implemented our attack and tested it successfully against passports issued by a range of nations.", "title": "" }, { "docid": "6ab38099b989f1d9bdc504c9b50b6bbe", "text": "Users' search tactics often appear naïve. Much research has endeavored to understand the rudimentary query typically seen in log analyses and user studies. Researchers have tested a number of approaches to supporting query development, including information literacy training and interaction design these have tried and often failed to induce users to use more complex search strategies. To further investigate this phenomenon, we combined established HCI methods with models from cultural studies, and observed customers' mediated searches for books in bookstores. Our results suggest that sophisticated search techniques demand mental models that many users lack.", "title": "" }, { "docid": "3d3c04826eafd366401231aba984419b", "text": "INTRODUCTION\nDespite the known advantages of objective physical activity monitors (e.g., accelerometers), these devices have high rates of non-wear, which leads to missing data. Objective activity monitors are also unable to capture valuable contextual information about behavior. Adolescents recruited into physical activity surveillance and intervention studies will increasingly have smartphones, which are miniature computers with built-in motion sensors.\n\n\nMETHODS\nThis paper describes the design and development of a smartphone application (\"app\") called Mobile Teen that combines objective and self-report assessment strategies through (1) sensor-informed context-sensitive ecological momentary assessment (CS-EMA) and (2) sensor-assisted end-of-day recall.\n\n\nRESULTS\nThe Mobile Teen app uses the mobile phone's built-in motion sensor to automatically detect likely bouts of phone non-wear, sedentary behavior, and physical activity. The app then uses transitions between these inferred states to trigger CS-EMA self-report surveys measuring the type, purpose, and context of activity in real-time. The end of the day recall component of the Mobile Teen app allows users to interactively review and label their own physical activity data each evening using visual cues from automatically detected major activity transitions from the phone's built-in motion sensors. Major activity transitions are identified by the app, which cues the user to label that \"chunk,\" or period, of time using activity categories.\n\n\nCONCLUSION\nSensor-driven CS-EMA and end-of-day recall smartphone apps can be used to augment physical activity data collected by objective activity monitors, filling in gaps during non-wear bouts and providing additional real-time data on environmental, social, and emotional correlates of behavior. Smartphone apps such as these have potential for affordable deployment in large-scale epidemiological and intervention studies.", "title": "" }, { "docid": "076ad699191bd3df87443f427268222a", "text": "Robotic systems for disease detection in greenhouses are expected to improve disease control, increase yield, and reduce pesticide application. We present a robotic detection system for combined detection of two major threats of greenhouse bell peppers: Powdery mildew (PM) and Tomato spotted wilt virus (TSWV). The system is based on a manipulator, which facilitates reaching multiple detection poses. Several detection algorithms are developed based on principal component analysis (PCA) and the coefficient of variation (CV). 
Tests ascertain the system can successfully detect the plant and reach the detection pose required for PM (along the side of the plant), yet it has difficulties in reaching the TSWV detection pose (above the plant). Increasing manipulator work-volume is expected to solve this issue. For TSWV, PCA-based classification with leaf vein removal, achieved the highest classification accuracy (90%) while the accuracy of the CV methods was also high (85% and 87%). For PM, PCA-based pixel-level classification was high (95.2%) while leaf condition classification accuracy was low (64.3%) since it was determined based on the upper side of the leaf while disease symptoms start on its lower side. Exposure of the lower side of the leaf during detection is expected to improve PM condition detection.", "title": "" }, { "docid": "77362cc72d7a09dbbb0f067c11fe8087", "text": "The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.", "title": "" }, { "docid": "883be979cd5e7d43ded67da1a40427ce", "text": "This paper reviews the first challenge on single image super-resolution (restoration of rich details in an low resolution image) with focus on proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) but learnable through low and high res train images. Each competition had ∽100 registered participants and 20 teams competed in the final testing phase. They gauge the state-of-the-art in single image super-resolution.", "title": "" }, { "docid": "50906e5d648b7598c307b09975daf2d8", "text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. To sustainably succeed in the market, obsolete business models have to be adapted and new business models can be developed. 
Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups to face the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business model accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.", "title": "" }, { "docid": "ec3542685d1b6e71e523cdcafc59d849", "text": "The goal of subspace segmentation is to partition a set of data drawn from a union of subspaces into their underlying subspaces. The performance of spectral clustering based approaches heavily depends on learned data affinity matrices, which are usually constructed either directly from the raw data or from their computed representations. In this paper, we propose a novel method to simultaneously learn the representations of data and the affinity matrix of representation in a unified optimization framework. A novel Augmented Lagrangian Multiplier based algorithm is designed to effectively and efficiently seek the optimal solution of the problem. The experimental results on both synthetic and real data demonstrate the efficacy of the proposed method and its superior performance over the state-of-the-art alternatives.", "title": "" }, { "docid": "d1eb1b18105d79c44dc1b6b3b2c06ee2", "text": "An implementation of a high speed AES algorithm based on FPGA is presented in this paper in order to improve the safety of data in transmission. The mathematical principle, encryption process and logic structure of the AES algorithm are introduced. So as to reach the purpose of improving the system computing speed, pipelining and parallel processing methods were used. The simulation results show that the high-speed AES encryption algorithm was implemented correctly. Using the AES encryption method, the data can be protected effectively.", "title": "" }, { "docid": "42961b66e41a155edb74cc4ab5493c9c", "text": "OBJECTIVE\nTo determine the preventive effect of manual lymph drainage on the development of lymphoedema related to breast cancer.\n\n\nDESIGN\nRandomised single blinded controlled trial.\n\n\nSETTING\nUniversity Hospitals Leuven, Leuven, Belgium.\n\n\nPARTICIPANTS\n160 consecutive patients with breast cancer and unilateral axillary lymph node dissection. The randomisation was stratified for body mass index (BMI) and axillary irradiation and treatment allocation was concealed. Randomisation was done independently from recruitment and treatment. 
Baseline characteristics were comparable between the groups.\n\n\nINTERVENTION\nFor six months the intervention group (n = 79) performed a treatment programme consisting of guidelines about the prevention of lymphoedema, exercise therapy, and manual lymph drainage. The control group (n = 81) performed the same programme without manual lymph drainage.\n\n\nMAIN OUTCOME MEASURES\nCumulative incidence of arm lymphoedema and time to develop arm lymphoedema, defined as an increase in arm volume of 200 mL or more in the value before surgery.\n\n\nRESULTS\nFour patients in the intervention group and two in the control group were lost to follow-up. At 12 months after surgery, the cumulative incidence rate for arm lymphoedema was comparable between the intervention group (24%) and control group (19%) (odds ratio 1.3, 95% confidence interval 0.6 to 2.9; P = 0.45). The time to develop arm lymphoedema was comparable between the two group during the first year after surgery (hazard ratio 1.3, 0.6 to 2.5; P = 0.49). The sample size calculation was based on a presumed odds ratio of 0.3, which is not included in the 95% confidence interval. This odds ratio was calculated as (presumed cumulative incidence of lymphoedema in intervention group/presumed cumulative incidence of no lymphoedema in intervention group)×(presumed cumulative incidence of no lymphoedema in control group/presumed cumulative incidence of lymphoedema in control group) or (10/90)×(70/30).\n\n\nCONCLUSION\nManual lymph drainage in addition to guidelines and exercise therapy after axillary lymph node dissection for breast cancer is unlikely to have a medium to large effect in reducing the incidence of arm lymphoedema in the short term. Trial registration Netherlands Trial Register No NTR 1055.", "title": "" }, { "docid": "43b2721bb2fb4e50e855c69ea147ffd1", "text": "Bladder tumours represent a heterogeneous group of cancers. The natural history of these bladder cancers is that of recurrence of disease and progression to higher grade and stage disease. Furthermore, recurrence and progression rates of superficial bladder cancer vary according to several tumour characteristics, mainly tumour grade and stage. The most recent World Health Organization (WHO) classification of tumours of the urinary system includes urothelial flat lesions: flat hyperplasia, dysplasia and carcinoma in situ. The papillary lesions are broadly subdivided into benign (papilloma and inverted papilloma), papillary urothelial neoplasia of low malignant potential (PUNLMP) and non-invasive papillary carcinoma (low or high grade). The initial proposal of the 2004 WHO has been achieved, with most reports supporting that categories are better defined than in previous classifications. An additional important issue is that PUNLMP, the most controversial proposal of the WHO in 2004, has lower malignant behaviour than low-grade carcinoma. Whether PUNLMP remains a clinically useful category, or whether this category should be expanded to include all low-grade, stage Ta lesions (PUNLMP and low-grade papillary carcinoma) as a wider category of less aggressive tumours not labelled as cancer, needs to be discussed in the near future. This article summarizes the recent literature concerning important issues in the pathology and the clinical management of patients with bladder urothelial carcinoma. 
Emphasis is placed on clinical presentation, the significance of haematuria, macroscopic appearance (papillary, solid or mixed, single or multiple) and synchronous or metachronous presentation (field disease vs monoclonal disease with seeding), classification and microscopic variations of bladder cancer with clinical significance, TNM distribution and the pathological grading according to the 2004 WHO proposal.", "title": "" }, { "docid": "6b9663085968c5483c9a2871b4807524", "text": "E-Commerce is one of the crucial trading methods worldwide. Hence, it is important to understand consumers’ online purchase intention. This research aims to examine factors that influence consumers’ online purchase intention among university students in Malaysia. Quantitative research approach has been adapted in this research by distributing online questionnaires to 250 Malaysian university students aged between 20-29 years old, who possess experience in online purchases. Findings of this research have discovered that trust, perceived usefulness and subjective norm are the significant factors in predicting online purchase intention. However, perceived ease of use and perceived enjoyment are not significant in predicting the variance in online purchase intention. The findings also revealed that subjective norm is the most significant predicting factor on online purchase intention among university students in Malaysia. Findings of this research will provide online marketers with a better understanding on online purchase intention which enable them to direct effective online marketing strategies.", "title": "" }, { "docid": "e715b87fc145d80dbab179abcc85c14b", "text": "This paper proposes an efficient multi-view 3D reconstruction method based on randomization and propagation scheme. Our method progressively refines a 3D model of a given scene by randomly perturbing the initial guess of 3D points and propagating photo-consistent ones to their neighbors. While finding local optima is an ordinary method for better photo-consistency, our randomization and propagation takes lucky matchings to spread better points replacing old ones for reducing the computational complexity. Experiments show favorable efficiency of the proposed method accompanied by competitive accuracy with the state-of-the-art methods.", "title": "" }, { "docid": "4d405c1c2919be01209b820f61876d57", "text": "This paper presents a single-pole eight-throw switch, based on an eight-way power divider, using substrate integrate waveguide(SIW) technology. Eight sectorial-lines are formed by inserting radial slot-lines on the top plate of SIW power divider. Each sectorial-line can be controlled independently with high level of isolation. The switching is accomplished by altering the capacitance of the varactor on the line, which causes different input impedances to be seen at a central probe to each sectorial line. The proposed structure works as a switching circuit and an eight-way power divider depending on the bias condition. The change in resonant frequency and input impedance are estimated by adapting a tapered transmission line model. The detailed design, fabrication, and measurement are discussed.", "title": "" }, { "docid": "60c976cb53d5128039e752e5f797f110", "text": "This essay presents and discusses the developing role of virtual and augmented reality technologies in education. 
Addressing the challenges in adapting such technologies to focus on improving students’ learning outcomes, the author discusses the inclusion of experiential modes as a vehicle for improving students’ knowledge acquisition. Stakeholders in the educational role of technology include students, faculty members, institutions, and manufacturers. While the benefits of such technologies are still under investigation, the technology landscape offers opportunities to enhance face-to-face and online teaching, including contributions in the understanding of abstract concepts and training in real environments and situations. Barriers to technology use involve limited adoption of augmented and virtual reality technologies, and, more directly, necessary training of teachers in using such technologies within meaningful educational contexts. The author proposes a six-step methodology to aid adoption of these technologies as basic elements within the regular education: training teachers; developing conceptual prototypes; teamwork involving the teacher, a technical programmer, and an educational architect; and producing the experience, which then provides results in the subsequent two phases wherein teachers are trained to apply augmentedand virtual-reality solutions within their teaching methodology using an available subject-specific experience and then finally implementing the use of the experience in a regular subject with students. The essay concludes with discussion of the business opportunities facing virtual reality in face-to-face education as well as augmented and virtual reality in online education.", "title": "" }, { "docid": "7ef3829b1fab59c50f08265d7f4e0132", "text": "Muscle glycogen is the predominant energy source for soccer match play, though its importance for soccer training (where lower loads are observed) is not well known. In an attempt to better inform carbohydrate (CHO) guidelines, we quantified training load in English Premier League soccer players (n = 12) during a one-, two- and three-game week schedule (weekly training frequency was four, four and two, respectively). In a one-game week, training load was progressively reduced (P < 0.05) in 3 days prior to match day (total distance = 5223 ± 406, 3097 ± 149 and 2912 ± 192 m for day 1, 2 and 3, respectively). Whilst daily training load and periodisation was similar in the one- and two-game weeks, total accumulative distance (inclusive of both match and training load) was higher in a two-game week (32.5 ± 4.1 km) versus one-game week (25.9 ± 2 km). In contrast, daily training total distance was lower in the three-game week (2422 ± 251 m) versus the one- and two-game weeks, though accumulative weekly distance was highest in this week (35.5 ± 2.4 km) and more time (P < 0.05) was spent in speed zones >14.4 km · h(-1) (14%, 18% and 23% in the one-, two- and three-game weeks, respectively). Considering that high CHO availability improves physical match performance but high CHO availability attenuates molecular pathways regulating training adaptation (especially considering the low daily customary loads reported here, e.g., 3-5 km per day), we suggest daily CHO intake should be periodised according to weekly training and match schedules.", "title": "" } ]
scidocsrr
997c7a0a0c6b3e15401e4b6389e86150
Trading with optimized uptrend and downtrend pattern templates using a genetic algorithm kernel
[ { "docid": "c35fa79bd405ec0fb6689d395929c055", "text": "This study examines the potential profit of bull flag technical trading rules using a template matching technique based on pattern recognition for the Nasdaq Composite Index (NASDAQ) and Taiwan Weighted Index (TWI). To minimize measurement error due to data snooping, this study performed a series of experiments to test the effectiveness of the proposed method. The empirical results indicated that all of the technical trading rules correctly predict the direction of changes in the NASDAQ and TWI. This finding may provide investors with important information on asset allocation. Moreover, better bull flag template price fit is associated with higher average return. The empirical results demonstrated that the average return of trading rules conditioned on bull flag significantly better than buying every day for the study period, especially for TWI. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "1c31f32f43819b2baad003c344cde1b1", "text": "One of the major duties of financial analysts is technical analysis. It is necessary to locate the technical patterns in the stock price movement charts to analyze the market behavior. Indeed, there are two main problems: how to define those preferred patterns (technical patterns) for query and how to match the defined pattern templates in different resolutions. As we can see, defining the similarity between time series (or time series subsequences) is of fundamental importance. By identifying the perceptually important points (PIPs) directly from the time domain, time series and templates of different lengths can be compared. Three ways of distance measure, including Euclidean distance (PIP-ED), perpendicular distance (PIP-PD) and vertical distance (PIP-VD), for PIP identification are compared in this paper. After the PIP identification process, both templateand rule-based pattern-matching approaches are introduced. The proposed methods are distinctive in their intuitiveness, making them particularly user friendly to ordinary data analysts like stock market investors. As demonstrated by the experiments, the templateand the rule-based time series matching and subsequence searching approaches provide different directions to achieve the goal of pattern identification. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6d8908ddf475d6571574aa4fd25ec3fe", "text": "In this case study in knowledge engineering and data mining, we implement a recognizer for two variations of thèbull ¯ag' technical charting heuristic and use this recognizer to discover trading rules on the NYSE Composite Index. Out-of-sample results indicate that these rules are effective. q 2002 Elsevier Science Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "dac5090c367ef05c8863da9c7979a619", "text": "Full vinyl polysiloxane casts of the vagina were obtained from 23 Afro-American, 39 Caucasian and 15 Hispanic women in lying, sitting and standing positions. A new shape, the pumpkin seed, was found in 40% of Afro-American women, but not in Caucasians or Hispanics. Analyses of cast and introital measurements revealed: (1) posterior cast length is significantly longer, anterior cast length is significantly shorter and cast width is significantly larger in Hispanics than in the other two groups and (2) the Caucasian introitus is significantly greater than that of the Afro-American subject.", "title": "" }, { "docid": "66ad5e67a06504b1062316c3e3bbc5cf", "text": "We investigate the community structure of physics subfields in the citation network of all Physical Review publications between 1893 and August 2007. We focus on well-cited publications (those receiving more than 100 citations), and apply modularity maximization to uncover major communities that correspond to clearly identifiable subfields of physics. While most of the links between communities connect those with obvious intellectual overlap, there sometimes exist unexpected connections between disparate fields due to the development of a widely applicable theoretical technique or by cross fertilization between theory and experiment. We also examine communities decade by decade and also uncover a small number of significant links between communities that are widely separated in time. © 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a25bd124c29b9ca41f794e327d822a91", "text": "SUMO is an open source traffic simulation package including the simulation application itself as well as supporting tools, mainly for network import and demand modeling. SUMO helps to investigate a large variety of research topics, mainly in the context of traffic management and vehicular communications. We describe the current state of the package, its major applications, both by research topic and by example, as well as future developments and extensions. Keywords-microscopic traffic simulation; traffic management; open source; software", "title": "" }, { "docid": "5f6e77c95d92c1b8f571921954f252d6", "text": "Parallel job scheduling has gained increasing recognition in recent years as a distinct area of study. However , there is concern about the divergence of theory and practice in the eld. We review theoretical research in this area, and recommendations based on recent results. This is contrasted with a proposal for standard interfaces among the components of a scheduling system, that has grown from requirements in the eld.", "title": "" }, { "docid": "d4a060243a2bf27f88e8893946e838b9", "text": "The phylogenetic relationships of the alpheid shrimp genera Betaeus (Dana, 1852) (15 species) and Betaeopsis (Yaldwyn, 1971) (three species), collectively known as hooded shrimps, are analyzed with morphological, molecular (16S and H3) and combined \"total evidence\" (morphology+DNA) datasets. The tree topology resulting from morphological and combined analyses places Betaeus jucundus as sister to all the remaining species of Betaeus and Betaeopsis, rendering Betaeus paraphyletic. On the other hand, Betaeopsis is recovered as monophyletic. Betaeus australis is positioned as sister to the remaining species of Betaeus s. str. (excluding B. jucundus), which is composed of three well-supported and resolved clades. 
Mapping of biogeographic traits on the combined tree suggests at least two possible historic scenarios. In the first scenario, the North-East Pacific harboring the highest diversity of hooded shrimps (seven species of Betaeus), acted as the \"center of origin\", where species appeared, matured and eventually migrated toward peripheral regions. In the second scenario, Betaeus+Betaeopsis originated in the southern Indo-West Pacific and subsequently colonized the North-East Pacific, where a major radiation involving dispersal/vicariance events took place. The mapping of life history traits (symbiosis vs. free living and gregariousness vs. single/pair living) in the combined tree suggests (1) that different types of symbioses with dissimilar host organisms (sea urchins, abalones, other decapods, spoon worms) evolved independently more than once in the group (in B. jucundus and in various lineages of Betaeus s. str.), and (2) that gregariousness was ancestral in the Betaeus s. str. -Betaeopsis clade and later shifted toward single/pair living in several lineages.", "title": "" }, { "docid": "5e94e30719ac09e86aaa50d9ab4ad57b", "text": "Blogs, regularly updated online journals, allow people to quickly and easily create and share online content. Most bloggers write about their everyday lives and generally have a small audience of regular readers. Readers interact with bloggers by contributing comments in response to specific blog posts. Moreover, readers of blogs are often bloggers themselves and acknowledge their favorite blogs by adding them to their blogrolls or linking to them in their posts. This paper presents a study of bloggers’ online and real life relationships in three blog communities: Kuwait Blogs, Dallas/Fort Worth Blogs, and United Arab Emirates Blogs. Through a comparative analysis of the social network structures created by blogrolls and blog comments, we find different characteristics for different kinds of links. Our online survey of the three communities reveals that few of the blogging interactions reflect close offline relationships, and moreover that many online relationships were formed through blogging.", "title": "" }, { "docid": "7eeb2bf2aaca786299ebc8507482e109", "text": "In this paper we argue that questionanswering (QA) over technical domains is distinctly different from TREC-based QA or Web-based QA and it cannot benefit from data-intensive approaches. Technical questions arise in situations where concrete problems require specific answers and explanations. Finding a justification of the answer in the context of the document is essential if we have to solve a real-world problem. We show that NLP techniques can be used successfully in technical domains for high-precision access to information stored in documents. We present ExtrAns, an answer extraction system over technical domains, its architecture, its use of logical forms for answer extractions and how terminology extraction becomes an important part of the system.", "title": "" }, { "docid": "50fe419f19754991e4356212c4fe2fab", "text": "In a recent book (Stanovich, 2004), I spent a considerable effort trying to work out the implications of dual process theory for the great rationality debate in cognitive science (see Cohen, 1981; Gigerenzer, 1996; Kahneman and Tversky, 1996; Stanovich, 1999; Stein, 1996). 
In this chapter, I wish to advance that discussion, first by discussing additions and complications to dual-process theory and then by working through the implications of these ideas for our view of human rationality.", "title": "" }, { "docid": "64c6012d2e97a1059161c295ae3b9cdb", "text": "One of the most popular user activities on the Web is watching videos. Services like YouTube, Vimeo, and Hulu host and stream millions of videos, providing content that is on par with TV. While some of this content is popular all over the globe, some videos might be only watched in a confined, local region.\n In this work we study the relationship between popularity and locality of online YouTube videos. We investigate whether YouTube videos exhibit geographic locality of interest, with views arising from a confined spatial area rather than from a global one. Our analysis is done on a corpus of more than 20 millions YouTube videos, uploaded over one year from different regions. We find that about 50% of the videos have more than 70% of their views in a single region. By relating locality to viralness we show that social sharing generally widens the geographic reach of a video. If, however, a video cannot carry its social impulse over to other means of discovery, it gets stuck in a more confined geographic region. Finally, we analyze how the geographic properties of a video's views evolve on a daily basis during its lifetime, providing new insights on how the geographic reach of a video changes as its popularity peaks and then fades away.\n Our results demonstrate how, despite the global nature of the Web, online video consumption appears constrained by geographic locality of interest: this has a potential impact on a wide range of systems and applications, spanning from delivery networks to recommendation and discovery engines, providing new directions for future research.", "title": "" }, { "docid": "d48430f65d844c92661d3eb389cdb2f2", "text": "In organizations that use DevOps practices, software changes can be deployed as fast as 500 times or more per day. Without adequate involvement of the security team, rapidly deployed software changes are more likely to contain vulnerabilities due to lack of adequate reviews. The goal of this paper is to aid software practitioners in integrating security and DevOps by summarizing experiences in utilizing security practices in a DevOps environment. We analyzed a selected set of Internet artifacts and surveyed representatives of nine organizations that are using DevOps to systematically explore experiences in utilizing security practices. We observe that the majority of the software practitioners have expressed the potential of common DevOps activities, such as automated monitoring, to improve the security of a system. Furthermore, organizations that integrate DevOps and security utilize additional security activities, such as security requirements analysis and performing security configurations. Additionally, these teams also have established collaboration between the security team and the development and operations teams.", "title": "" }, { "docid": "bdc9bc09af90bd85f64c79cbca766b61", "text": "The inhalation route is frequently used to administer drugs for the management of respiratory diseases such as asthma or chronic obstructive pulmonary disease. Compared with other routes of administration, inhalation offers a number of advantages in the treatment of these diseases. 
For example, via inhalation, a drug is directly delivered to the target organ, conferring high pulmonary drug concentrations and low systemic drug concentrations. Therefore, drug inhalation is typically associated with high pulmonary efficacy and minimal systemic side effects. The lung, as a target, represents an organ with a complex structure and multiple pulmonary-specific pharmacokinetic processes, including (1) drug particle/droplet deposition; (2) pulmonary drug dissolution; (3) mucociliary and macrophage clearance; (4) absorption to lung tissue; (5) pulmonary tissue retention and tissue metabolism; and (6) absorptive drug clearance to the systemic perfusion. In this review, we describe these pharmacokinetic processes and explain how they may be influenced by drug-, formulation- and device-, and patient-related factors. Furthermore, we highlight the complex interplay between these processes and describe, using the examples of inhaled albuterol, fluticasone propionate, budesonide, and olodaterol, how various sequential or parallel pulmonary processes should be considered in order to comprehend the pulmonary fate of inhaled drugs.", "title": "" }, { "docid": "98b6da9a1ab53b94c50a98b25cdf2da4", "text": "There are many thousands of hereditary diseases in humans, each of which has a specific combination of phenotypic features, but computational analysis of phenotypic data has been hampered by lack of adequate computational data structures. Therefore, we have developed a Human Phenotype Ontology (HPO) with over 8000 terms representing individual phenotypic anomalies and have annotated all clinical entries in Online Mendelian Inheritance in Man with the terms of the HPO. We show that the HPO is able to capture phenotypic similarities between diseases in a useful and highly significant fashion.", "title": "" }, { "docid": "11913ec11f39eb944f5ffde3ac727268", "text": "Shared-memory multiprocessors are frequently used in a time-sharing style with multiple parallel applications executing at the same time. In such an environment, where the machine load is continuously varying, the question arises of how an application should maximize its performance while being fair to other users of the system. In this paper, we address this issue. We first show that if the number of runnable processes belonging to a parallel application significantly exceeds the effective number of physical processors executing it, its performance can be significantly degraded. We then propose a way of controlling the number of runnable processes associated with an application dynamically, to ensure good performance. The optimal number of runnable processes for each application is determined by a centralized server, and applications dynamically suspend or resume processes in order to match that number. A preliminary implementation of the proposed scheme is now running on the Encore Multimax and we show how it helps improve the performance of several applications. In some cases the improvement is more than a factor of two. We also discuss implications of the proposed scheme for multiprocessor schedulers, and how the scheme should interface with parallel programming languages.", "title": "" }, { "docid": "6a68383137a2b4041a251ae2c12d2710", "text": "Stochastic natural language generation systems that are trained from labelled datasets are often domainspecific in their annotation and in their mapping from semantic input representations to lexical-syntactic outputs. 
As a result, learnt models fail to generalize across domains, heavily restricting their usability beyond single applications. In this article, we focus on the problem of domain adaptation for natural language generation. We show how linguistic knowledge from a source domain, for which labelled data is available, can be adapted to a target domain by reusing training data across domains. As a key to this, we propose to employ abstract meaning representations as a common semantic representation across domains. We model natural language generation as a long short-term memory recurrent neural network encoderdecoder, in which one recurrent neural network learns a latent representation of a semantic input, and a second recurrent neural network learns to decode it to a sequence of words. We show that the learnt representations can be transferred across domains and can be leveraged effectively to improve training on new unseen domains. Experiments in three different domains and with six datasets demonstrate that the lexical-syntactic constructions learnt in one domain can be transferred to new domains and achieve up to 75-100% of the performance of in-domain training. This is based on objective metrics such as BLEU and semantic error rate and a subjective human rating study. Training a policy from prior knowledge from a different domain is consistently better than pure in-domain training by up to 10%.", "title": "" }, { "docid": "957863eafec491fae0710dd33c043ba8", "text": "In this paper, we present an automated behavior analysis system developed to assist the elderly and individuals with disabilities who live alone, by learning and predicting standard behaviors to improve the efficiency of their healthcare. Established behavioral patterns have been recorded using wireless sensor networks composed by several event-based sensors that captured raw measures of the actions of each user. Using these data, behavioral patterns of the residents were extracted using Bayesian statistics. The behavior was statistically estimated based on three probabilistic features we introduce, namely sensor activation likelihood, sensor sequence likelihood, and sensor event duration likelihood. Real data obtained from different home environments were used to verify the proposed method in the individual analysis. The results suggest that the monitoring system can be used to detect anomalous behavior signs which could reflect changes in health status of the user, thus offering an opportunity to intervene if required.", "title": "" }, { "docid": "528812aa635d6b9f0b65cc784fb256e1", "text": "Pointing tasks are commonly studied in HCI research, for example to evaluate and compare different interaction techniques or devices. A recent line of work has modelled user-specific touch behaviour with machine learning methods to reveal spatial targeting error patterns across the screen. These models can also be applied to improve accuracy of touchscreens and keyboards, and to recognise users and hand postures. However, no implementation of these techniques has been made publicly available yet, hindering broader use in research and practical deployments. Therefore, this paper presents a toolkit which implements such touch models for data analysis (Python), mobile applications (Java/Android), and the web (JavaScript). We demonstrate several applications, including hand posture recognition, on touch targeting data collected in a study with 24 participants. 
We consider different target types and hand postures, changing behaviour over time, and the influence of hand sizes.", "title": "" }, { "docid": "8ef2ab1c25af8290e7f6492fbcfb4321", "text": "This chapter discusses the topic of Goal Reasoning and its relation to Trusted Autonomy. Goal Reasoning studies how autonomous agents can extend their reasoning capabilities beyond their plans and actions, to consider their goals. Such capability allows a Goal Reasoning system to more intelligently react to unexpected events or changes in the environment. We present two different models of Goal Reasoning: Goal-Driven Autonomy (GDA) and goal refinement. We then discuss several research topics related to each, and how they relate to the topic of Trusted Autonomy. Finally, we discuss several directions of ongoing work that are particularly interesting in the context of the chapter: using a model of inverse trust as a basis for adaptive autonomy, and studying how Goal Reasoning agents may choose to rebel (i.e., act contrary to a given command). Benjamin Johnson NRC Research Associate at the US Naval Research Laboratory; Washington, DC; USA e-mail: benjamin.johnson.ctr@nrl.navy.mil Michael W. Floyd Knexus Research Corporation; Springfield, VA; USA e-mail: michael.floyd@knexusresearch.com Alexandra Coman NRC Research Associate at the US Naval Research Laboratory; Washington, DC; USA e-mail: alexandra.coman.ctr.ro@nrl.navy.mil Mark A. Wilson Navy Center for Applied Research in AI, US Naval Research Laboratory; Washington, DC; USA e-mail: mark.wilson@nrl.navy.mil David W. Aha Navy Center for Applied Research in AI, US Naval Research Laboratory; Washington, DC; USA e-mail: david.aha@nrl.navy.mil", "title": "" }, { "docid": "fa04415325731a0f1b80a93d2e434c80", "text": "Human gait is an attractive modality for recognizing people at a distance. In this paper we adopt an appearance-basedapproach to the problem of gait recognition. The width of the outer contour of the binarized silhouette of a walking person is chosen as the basic image feature. Different gait features are extracted from the width vector such as the dowsampled, smoothed width vectors, the velocity profile etc. and sequences of such temporally ordered feature vectors are used for representing a person’s gait. We use the dynamic time-warping (DTW) approach for matching so that non-linear time normalization may be used to deal with the naturally-occuring changes in walking speed. The performance of the proposed method is tested using different gait databases.", "title": "" }, { "docid": "add36ca538a8ae362c0224acfa020700", "text": "A frustrating aspect of software development is that compiler error messages often fail to locate the actual cause of a syntax error. An errant semicolon or brace can result in many errors reported throughout the file. We seek to find the actual source of these syntax errors by relying on the consistency of software: valid source code is usually repetitive and unsurprising. We exploit this consistency by constructing a simple N-gram language model of lexed source code tokens. We implemented an automatic Java syntax-error locator using the corpus of the project itself and evaluated its performance on mutated source code from several projects. Our tool, trained on the past versions of a project, can effectively augment the syntax error locations produced by the native compiler. 
Thus we provide a methodology and tool that exploits the naturalness of software source code to detect syntax errors alongside the parser.", "title": "" } ]
scidocsrr
a81d77eda544c85154fef5117434757b
Algorithms for orthogonal nonnegative matrix factorization
[ { "docid": "570bc6b72db11c32292f705378042089", "text": "In this paper, we propose a novel method, called local nonnegative matrix factorization (LNMF), for learning spatially localized, parts-based subspace representation of visual patterns. An objective function is defined to impose localization constraint, in addition to the non-negativity constraint in the standard NMF [1]. This gives a set of bases which not only allows a non-subtractive (part-based) representation of images but also manifests localized features. An algorithm is presented for the learning of such basis components. Experimental results are presented to compare LNMF with the NMF and PCA methods for face representation and recognition, which demonstrates advantages of LNMF.", "title": "" }, { "docid": "c00e78121637ee9bcf1640c41204afd0", "text": "In this paper we present a methodology for analyzing polyphonic musical passages comprised by notes that exhibit a harmonically fixed spectral profile (such as piano notes). Taking advantage of this unique note structure we can model the audio content of the musical passage by a linear basis transform and use non-negative matrix decomposition methods to estimate the spectral profile and the temporal information of every note. This approach results in a very simple and compact system that is not knowledge-based, but rather learns notes by observation.", "title": "" } ]
[ { "docid": "99faeab3adcf89a3f966b87547cea4e7", "text": "In-service structural health monitoring of composite aircraft structures plays a key role in the assessment of their performance and integrity. In recent years, Fibre Optic Sensors (FOS) have proved to be a potentially excellent technique for real-time in-situ monitoring of these structures due to their numerous advantages, such as immunity to electromagnetic interference, small size, light weight, durability, and high bandwidth, which allows a great number of sensors to operate in the same system, and the possibility to be integrated within the material. However, more effort is still needed to bring the technology to a fully mature readiness level. In this paper, recent research and applications in structural health monitoring of composite aircraft structures using FOS have been critically reviewed, considering both the multi-point and distributed sensing techniques.", "title": "" }, { "docid": "51eb0e35baa92a85a620b9bf15cbfca0", "text": "The detection of bad weather conditions is crucial for meteorological centers, specially with demand for air, sea and ground traffic management. In this article, a system based on computer vision is presented which detects the presence of rain or snow. To separate the foreground from the background in image sequences, a classical Gaussian Mixture Model is used. The foreground model serves to detect rain and snow, since these are dynamic weather phenomena. Selection rules based on photometry and size are proposed in order to select the potential rain streaks. Then a Histogram of Orientations of rain or snow Streaks (HOS), estimated with the method of geometric moments, is computed, which is assumed to follow a model of Gaussian-uniform mixture. The Gaussian distribution represents the orientation of the rain or the snow whereas the uniform distribution represents the orientation of the noise. An algorithm of expectation maximization is used to separate these two distributions. Following a goodness-of-fit test, the Gaussian distribution is temporally smoothed and its amplitude allows deciding the presence of rain or snow. When the presence of rain or of snow is detected, the HOS makes it possible to detect the pixels of rain or of snow in the foreground images, and to estimate the intensity of the precipitation of rain or of snow. The applications of the method are numerous and include the detection of critical weather conditions, the observation of weather, the reliability improvement of video-surveillance systems and rain rendering.", "title": "" }, { "docid": "4f5bc7305614149ff9dc178d60bba721", "text": "Love is a wondrous state, deep, tender, and rewarding. Because of its intimate and personal nature it is regarded by some as an improper topic for experimental research. But, whatever our personal feelings may be, our assigned mission as psychologists is to analyze all facets of human and animal behavior into their component variables. So far as love or affection is concerned, psychologists have failed in this mission. The little we know about love does not transcend simple observation, and the little we write about it has been written better by poets and novelists. But of greater concern is the fact that psychologists tend to give progressively less attention to a motive which pervades our entire lives. 
Psychologists, at least psychologists who write textbooks, not only show no interest in the origin and development of love or affection, but they seem to be unaware of its very existence.", "title": "" }, { "docid": "f6b49f33720ef789cf085a5ab8154ed4", "text": "Several artificial neural network (ANN) models with a feed-forward, back-propagation network structure and various training algorithms, are developed to forecast daily and monthly river flow discharges in Manwan Reservoir. In order to test the applicability of these models, they are compared with a conventional time series flow prediction model. Results indicate that the ANN models provide better accuracy in forecasting river flow than does the auto-regression time series model. In particular, the scaled conjugate gradient algorithm furnishes the highest correlation coefficient and the smallest root mean square error. This ANN model is finally employed in the advanced water resource project of Yunnan Power Group.", "title": "" }, { "docid": "028070222acb092767aadfdd6824d0df", "text": "The autism spectrum disorders (ASDs) are a group of conditions characterized by impairments in reciprocal social interaction and communication, and the presence of restricted and repetitive behaviours. Individuals with an ASD vary greatly in cognitive development, which can range from above average to intellectual disability. Although ASDs are known to be highly heritable (∼90%), the underlying genetic determinants are still largely unknown. Here we analysed the genome-wide characteristics of rare (<1% frequency) copy number variation in ASD using dense genotyping arrays. When comparing 996 ASD individuals of European ancestry to 1,287 matched controls, cases were found to carry a higher global burden of rare, genic copy number variants (CNVs) (1.19 fold, P = 0.012), especially so for loci previously implicated in either ASD and/or intellectual disability (1.69 fold, P = 3.4 × 10-4). Among the CNVs there were numerous de novo and inherited events, sometimes in combination in a given family, implicating many novel ASD genes such as SHANK2, SYNGAP1, DLGAP2 and the X-linked DDX53–PTCHD1 locus. We also discovered an enrichment of CNVs disrupting functional gene sets involved in cellular proliferation, projection and motility, and GTPase/Ras signalling. Our results reveal many new genetic and functional targets in ASD that may lead to final connected pathways.", "title": "" }, { "docid": "d68147bf8637543adf3053689de740c3", "text": "In this paper, we do a research on the keyword extraction method of news articles. We build a candidate keywords graph model based on the basic idea of TextRank, use Word2Vec to calculate the similarity between words as transition probability of nodes' weight, calculate the word score by iterative method and pick the top N of the candidate keywords as the final results. Experimental results show that the weighted TextRank algorithm with correlation of words can improve performance of keyword extraction generally.", "title": "" }, { "docid": "54722f4851707c2bf51d18910728a31c", "text": "Many modern companies wish to maintain knowledge in the form of a corporate knowledge graph and to use and manage this knowledge via a knowledge graph management system (KGMS). We formulate various requirements for a fully-fledged KGMS. In particular, such a system must be capable of performing complex reasoning tasks but, at the same time, achieve efficient and scalable reasoning over Big Data with an acceptable computational complexity. 
Moreover, a KGMS needs interfaces to corporate databases, the web, and machine-learning and analytics packages. We present KRR formalisms and a system achieving these goals. To this aim, we use specific suitable fragments from the Datalog± family of languages, and we introduce the vadalog system, which puts these swift logics into action. This system exploits the theoretical underpinning of relevant Datalog± languages and combines it with existing and novel techniques from database and AI practice.", "title": "" }, { "docid": "d27d17176181b09a74c9c8115bc6a66e", "text": "In this chapter, we provide definitions of Business Intelligence (BI) and outline the development of BI over time, particularly carving out current questions of BI. Different scenarios of BI applications are considered and business perspectives and views of BI on the business process are identified. Further, the goals and tasks of BI are discussed from a management and analysis point of view and a method format for BI applications is proposed. This format also gives an outline of the book’s contents. Finally, examples from different domain areas are introduced which are used for demonstration in later chapters of the book. 1.1 Definition of Business Intelligence If one looks for a definition of the term Business Intelligence (BI) one will find the first reference already in 1958 in a paper of H.P. Luhn (cf. [14]). Starting from the definition of the terms “Intelligence” as “the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal” and “Business” as “a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera”, he specifies a business intelligence system as “[an] automatic system [that] is being developed to disseminate information to the various sections of any industrial, scientific or government organization.” The main task of Luhn’s system was automatic abstracting of documents and delivering this information to appropriate so-called action points. This definition did not come into effect for 30 years, and in 1989Howard Dresner coined the term Business Intelligence (BI) again. He introduced it as an umbrella term for a set of concepts and methods to improve business decision making, using systems based on facts. Many similar definitions have been given since. In Negash [18], important aspects of BI are emphasized by stating that “. . . business intelligence systems provide actionable information delivered at the right time, at the right location, and in the right form to assist decision makers.” Today one can find many different definitions which show that at the top level the intention of BI has not changed so much. For example, in [20] BI is defined as “an integrated, company-specific, IT-based total approach for managerial decision © Springer-Verlag Berlin Heidelberg 2015 W. Grossmann, S. Rinderle-Ma, Fundamentals of Business Intelligence, Data-Centric Systems and Applications, DOI 10.1007/978-3-662-46531-8_1 1", "title": "" }, { "docid": "3e845c9a82ef88c7a1f4447d57e35a3e", "text": "Link prediction is a key problem for network-structured data. Link prediction heuristics use some score functions, such as common neighbors and Katz index, to measure the likelihood of links. They have obtained wide practical uses due to their simplicity, interpretability, and for some of them, scalability. 
However, every heuristic has a strong assumption on when two nodes are likely to link, which limits their effectiveness on networks where these assumptions fail. In this regard, a more reasonable way should be learning a suitable heuristic from a given network instead of using predefined ones. By extracting a local subgraph around each target link, we aim to learn a function mapping the subgraph patterns to link existence, thus automatically learning a “heuristic” that suits the current network. In this paper, we study this heuristic learning paradigm for link prediction. First, we develop a novel γ-decaying heuristic theory. The theory unifies a wide range of heuristics in a single framework, and proves that all these heuristics can be well approximated from local subgraphs. Our results show that local subgraphs reserve rich information related to link existence. Second, based on the γ-decaying theory, we propose a new method to learn heuristics from local subgraphs using a graph neural network (GNN). Its experimental results show unprecedented performance, working consistently well on a wide range of problems.", "title": "" }, { "docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13", "text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.", "title": "" }, { "docid": "038e48bcae7346ef03a318bb3a280bcc", "text": "Low back pain (LBP) is a problem worldwide with a lifetime prevalence reported to be as high as 84%. The lifetime prevalence of low back pain is reported to be as high as 84%, and the prevalence of chronic low back pain is about 23%, with 11–12% of the population being disabled by low back pain [1]. LBP is defined as pain experienced between the twelfth rib and the inferior gluteal fold, with or without associated leg pain [2]. Based on the etiology LBP is classified as Specific Low Back Pain and Non-specific Low Back Pain. Of all the LBP patients 10% are attributed to Specific and 90% are attributed to NonSpecific Low Back Pain (NSLBP) [3]. Specific LBP are those back pains which have specific etiology causes like Sponylolisthesis, Spondylosis, Ankylosing Spondylitis, Prolapsed disc etc.", "title": "" }, { "docid": "3f206b161dc55aea204dda594127bf3d", "text": "A key challenge in fine-grained recognition is how to find and represent discriminative local regions. Recent attention models are capable of learning discriminative region localizers only from category labels with reinforcement learning. 
However, not utilizing any explicit part information, they are not able to accurately find multiple distinctive regions. In this work, we introduce an attribute-guided attention localization scheme where the local region localizers are learned under the guidance of part attribute descriptions. By designing a novel reward strategy, we are able to learn to locate regions that are spatially and semantically distinctive with reinforcement learning algorithm. The attribute labeling requirement of the scheme is more amenable than the accurate part location annotation required by traditional part-based fine-grained recognition methods. Experimental results on the CUB-200-2011 dataset [1] demonstrate the superiority of the proposed scheme on both fine-grained recognition and attribute recognition.", "title": "" }, { "docid": "18aa98d42150adb110632b20118909e4", "text": "In recent times, 60 GHz millimeter wave systems have become increasingly attractive due to the escalating demand for multi-Gb/s wireless communication. Recent works have demonstrated the ability to realize a 60 GHz transceiver by means of a cost-effective CMOS process. This paper aims to give the most up-to-date status of the 60 GHz wireless transceiver development, with an emphasis on realizing low power consumption and small form factor that is applicable for mobile terminals. To make 60 GHz wireless more robust and ease of use in various applications, broadband propagation and interference characteristics are measured at the 60 GHz band in an application-oriented office environment, considering the concurrent use of multiple frequency channels and multiple terminals. Moreover, this paper gives an overview of future millimeter wave systems.", "title": "" }, { "docid": "6508fc8732fd22fde8c8ac180a2e19e3", "text": "The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.", "title": "" }, { "docid": "50b316a52bdfacd5fe319818d0b22962", "text": "Artificial neural networks (ANN) are used to predict 1) degree program completion, 2) earned hours, and 3) GPA for college students. The feed forward neural net architecture is used with the back propagation learning function and the logistic activation function. The database used for training and validation consisted of 17,476 student transcripts from Fall 1983 through Fall 1994. It is shown that age, gender, race, ACT scores, and reading level are significant in predicting the degree program completion, earned hours, and GPA. 
Of the three, earned hours proved the most difficult to predict.", "title": "" }, { "docid": "3cd19e73aade3e99fff4b213afd3c678", "text": "We describe the dialogue model for the virtual humans developed at the Institute for Creative Technologies at the University of Southern California. The dialogue model contains a rich set of information state and dialogue moves to allow a wide range of behaviour in multimodal, multiparty interaction. We extend this model to enable non-team negotiation, using ideas from social science literature on negotiation and implemented strategies and dialogue moves for this area. We present a virtual human doctor who uses this model to engage in multimodal negotiation dialogue with people from other organisations. The doctor is part of the SASO-ST system, used for training for non-team interactions.", "title": "" }, { "docid": "132bb5b7024de19f4160664edca4b4f5", "text": "Generic Competitive Strategy: Basically, strategy is about two things: deciding where you want your business to go, and deciding how to get there. A more complete definition is based on competitive advantage, the object of most corporate strategy: “Competitive advantage grows out of value a firm is able to create for its buyers that exceeds the firm's cost of creating it. Value is what buyers are willing to pay, and superior value stems from offering lower prices than competitors for equivalent benefits or providing unique benefits that more than offset a higher price. There are two basic types of competitive advantage: cost leadership and differentiation.” Michael Porter Competitive strategies involve taking offensive or defensive actions to create a defendable position in the industry. Generic strategies can help the organization to cope with the five competitive forces in the industry and do better than other organization in the industry. Generic strategies include ‘overall cost leadership’, ‘differentiation’, and ‘focus’. Generally firms pursue only one of the above generic strategies. However some firms make an effort to pursue only one of the above generic strategies. However some firms make an effort to pursue more than one strategy at a time by bringing out a differentiated product at low cost. Though approaches like these are successful in short term, they are hardly sustainable in the long term. If firms try to maintain cost leadership as well as differentiation at the same time, they may fail to achieve either.", "title": "" }, { "docid": "45ef23f40fd4241b58b8cb0810695785", "text": "Two-wheeled wheelchairs are considered highly nonlinear and complex systems. The systems mimic a double-inverted pendulum scenario and will provide better maneuverability in confined spaces and also to reach higher level of height for pick and place tasks. The challenge resides in modeling and control of the two-wheeled wheelchair to perform comparably to a normal four-wheeled wheelchair. Most common modeling techniques have been accomplished by researchers utilizing the basic Newton's Laws of motion and some have used 3D tools to model the system where the models are much more theoretical and quite far from the practical implementation. This article is aimed at closing the gap between the conventional mathematical modeling approaches where the integrated 3D modeling approach with validation on the actual hardware implementation was conducted. 
To achieve this, both nonlinear and a linearized model in terms of state space model were obtained from the mathematical model of the system for analysis and, thereafter, a 3D virtual prototype of the wheelchair was developed, simulated, and analyzed. This has increased the confidence level for the proposed platform and facilitated the actual hardware implementation of the two-wheeled wheelchair. Results show that the prototype developed and tested has successfully worked within the specific requirements established.", "title": "" }, { "docid": "f7b8956748e8c19468490f35ed764e4e", "text": "We show how the database community’s notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data-reduction tool; networking approaches, however, have focused on application specific solutions, whereas our innetwork aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and", "title": "" } ]
scidocsrr
88f3fe0dca0f76febdb3f4f42363cfae
Bitcoin Beacon
[ { "docid": "f2a66fb35153e7e10d93fac5c8d29374", "text": "A widespread security claim of the Bitcoin system, presented in the original Bitcoin white-paper, states that the security of the system is guaranteed as long as there is no attacker in possession of half or more of the total computational power used to maintain the system. This claim, however, is proved based on theoretically awed assumptions. In the paper we analyze two kinds of attacks based on two theoretical aws: the Block Discarding Attack and the Di culty Raising Attack. We argue that the current theoretical limit of attacker's fraction of total computational power essential for the security of the system is in a sense not 1 2 but a bit less than 1 4 , and outline proposals for protocol change that can raise this limit to be as close to 1 2 as we want. The basic idea of the Block Discarding Attack has been noted as early as 2010, and lately was independently though-of and analyzed by both author of this paper and authors of a most recently pre-print published paper. We thus focus on the major di erences of our analysis, and try to explain the unfortunate surprising coincidence. To the best of our knowledge, the second attack is presented here for the rst time.", "title": "" } ]
[ { "docid": "9963e1f7126812d9111a4cb6a8eb8dc6", "text": "The renewed interest in grapheme to phoneme conversion (G2P), due to the need of developing multilingual speech synthesizers and recognizers, suggests new approaches more efficient than the traditional rule&exception ones. A number of studies have been performed to investigate the possible use of machine learning techniques to extract phonetic knowledge in a automatic way starting from a lexicon. In this paper, we present the results of our experiments in this research field. Starting from the state of art, our contribution is in the development of a language-independent learning scheme for G2P based on Classification and Regression Trees (CART). To validate our approach, we realized G2P converters for the following languages: British English, American English, French and Brazilian Portuguese.", "title": "" }, { "docid": "e08990fec382e1ba5c089d8bc1629bc5", "text": "Goal-oriented spoken dialogue systems have been the most prominent component in todays virtual personal assistants, which allow users to speak naturally in order to finish tasks more efficiently. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. However, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge of the prior work and the recent state-of-the-art work. Therefore, this tutorial is designed to focus on an overview of dialogue system development while describing most recent research for building dialogue systems, and summarizing the challenges, in order to allow researchers to study the potential improvements of the state-of-the-art dialogue systems. The tutorial material is available at http://deepdialogue.miulab.tw. 1 Tutorial Overview With the rising trend of artificial intelligence, more and more devices have incorporated goal-oriented spoken dialogue systems. Among popular virtual personal assistants, Microsoft’s Cortana, Apple’s Siri, Amazon Alexa, and Google Assistant have incorporated dialogue system modules in various devices, which allow users to speak naturally in order to finish tasks more efficiently. Traditional conversational systems have rather complex and/or modular pipelines. The advancement of deep learning technologies has recently risen the applications of neural models to dialogue modeling. Nevertheless, applying deep learning technologies for building robust and scalable dialogue systems is still a challenging task and an open research area as it requires deeper understanding of the classic pipelines as well as detailed knowledge on the benchmark of the models of the prior work and the recent state-of-the-art work. The goal of this tutorial is to provide the audience with the developing trend of dialogue systems, and a roadmap to get them started with the related work. The first section motivates the work on conversationbased intelligent agents, in which the core underlying system is task-oriented dialogue systems. The following section describes different approaches using deep learning for each component in the dialogue system and how it is evaluated. The last two sections focus on discussing the recent trends and current challenges on dialogue system technology and summarize the challenges and conclusions. The detailed content is described as follows. 
2 Dialogue System Basics This section will motivate the work on conversation-based intelligent agents, in which the core underlying system is task-oriented spoken dialogue systems. The section starts with an overview of the standard pipeline framework for dialogue system illustrated in Figure 1 (Tur and De Mori, 2011). Basic components of a dialog system are automatic speech recognition (ASR), language understanding (LU), dialogue management (DM), and natural language generation (NLG) (Rudnicky et al., 1999; Zue et al., 2000; Zue and Glass, 2000). This tutorial will mainly focus on LU, DM, and NLG parts.", "title": "" }, { "docid": "60114bebc1b64a3bfd5dc010a1a4891c", "text": "Attachment anxiety is expected to be positively associated with dependence and self-criticism. However, attachment avoidance is expected to be negatively associated with dependence but positively associated with self-criticism. Both dependence and self-criticism are expected to be related to depressive symptoms. Data were analyzed from 424 undergraduate participants at a large Midwestern university, using structural equation modeling. Results indicated that the relation between attachment anxiety and depressive symptoms was fully mediated by dependence and self-criticism, whereas the relation between attachment avoidance and depressive symptoms was partially mediated by dependence and self-criticism. Moreover, through a multiple-group comparison analysis, the results indicated that men with high levels of attachment avoidance are more likely than women to be self-critical.", "title": "" }, { "docid": "2892a61cd6097e4bf1f580a0f36e8a9e", "text": "In this paper, a low-power full-band low-noise amplifier (FB-LNA) for ultra-wideband applications is presented. The proposed FB-LNA uses a stagger-tuning technique to extend the full bandwidth from 3.1 to 10.6 GHz. A current-reused architecture is employed to decrease the power consumption. By using an input common-gate stage, the input resistance of 50 Ω can be obtained without an extra input-matching network. The output matching is achieved by cascading an output common-drain stage. FB-LNA was implemented with a TSMC 0.18-μm CMOS process. On-wafer measurement shows an average power gain of 9.7 dB within the full operation band. The input reflection coefficient and the output reflection coefficient are both less than -10 dB over the entire band. The noise figure of the full band remained under 7 dB with a minimum value of 5.27 dB. The linearity of input third-order intercept point is -2.23 dBm. The power consumptions at 1.5-V supply voltage without an output buffer is 4.5 mW. The chip area occupies 1.17 × 0.88 mm2.", "title": "" }, { "docid": "a412cff5999d0c257562335465a28323", "text": "In transfer learning, what and how to transfer are two primary issues to be addressed, as different transfer learning algorithms applied between a source and a target domain result in different knowledge transferred and thereby the performance improvement in the target domain. Determining the optimal one that maximizes the performance improvement requires either exhaustive exploration or considerable expertise. Meanwhile, it is widely accepted in educational psychology that human beings improve transfer learning skills of deciding what to transfer through meta-cognitive reflection on inductive transfer learning practices. 
Motivated by this, we propose a novel transfer learning framework known as Learning to Transfer (L2T) to automatically determine what and how to transfer are the best by leveraging previous transfer learning experiences. We establish the L2T framework in two stages: 1) we learn a reflection function encrypting transfer learning skills from experiences; and 2) we infer what and how to transfer are the best for a future pair of domains by optimizing the reflection function. We also theoretically analyse the algorithmic stability and generalization bound of L2T, and empirically demonstrate its superiority over several state-ofthe-art transfer learning algorithms.", "title": "" }, { "docid": "79910e1dadf52be1b278d2e57d9bdb9e", "text": "Information Visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real-time.", "title": "" }, { "docid": "ce48548c0004b074b18f95792f3e6ce8", "text": "In this paper, we study domain adaptation with a state-of-the-art hierarchical neural network for document-level sentiment classification. We first design a new auxiliary task based on sentiment scores of domain-independent words. We then propose two neural network architectures to respectively induce document embeddings and sentence embeddings that work well for different domains. When these document and sentence embeddings are used for sentiment classification, we find that with both pseudo and external sentiment lexicons, our proposed methods can perform similarly to or better than several highly competitive domain adaptation methods on a benchmark dataset of product reviews.", "title": "" }, { "docid": "63262d2a9abdca1d39e31d9937bb41cf", "text": "A structural model is presented for synthesizing binaural sound from a monaural source. The model produces well-controlled vertical as well as horizontal effects. The model is based on a simplified time-domain description of the physics of wave propagation and diffraction. The components of the model have a one-to-one correspondence with the physical sources of sound diffraction, delay, and reflection. The simplicity of the model permits efficient implementation in DSP hardware, and thus facilitates real-time operation. Additionally, the parameters in the model can be adjusted to fit a particular individual’s characteristics, thereby producing individualized head-related transfer functions. Experimental tests verify the perceptual effectiveness of the approach.", "title": "" }, { "docid": "ab677299ffa1e6ae0f65daf5de75d66c", "text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. 
This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature both memory cost and integration cost are hypothesized to be heavily influenced by locality (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost. The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.", "title": "" }, { "docid": "39c097ba72618ccc901e714b855d3048", "text": "In this paper we present a pattern for growth mindset development. We believe that students can be taught to positively change their mindset, where experience, training, and personal effort can add to a unique student's genetic endowment. We use our long years' experience and synthesized facilitation methods and techniques to assess insight mentoring and to improve it through growth mindset development. These can help students make creative changes in their life and see the world with new eyes in a new way. The pattern allows developing a growth mindset and improving our lives and the lives of those around us.", "title": "" }, { "docid": "ced688e5215ba23fd8bcb8c2ba8584d3", "text": "N2pc is generally interpreted as the electrocortical correlate of the distractor-suppression mechanisms through which attention selection takes place in humans. Here, we present data that challenge this common N2pc interpretation. In Experiment 1, multiple distractors induced greater N2pc amplitudes even when they facilitated target identification, despite the suppression account of the N2pc predicted the contrary; in Experiment 2, spatial proximity between target and distractors did not affect the N2pc amplitude, despite resulting in more interference in response times; in Experiment 3, heterogeneous distractors delayed response times but did not elicit a greater N2pc relative to homogeneous distractors again in contrast with what would have predicted the suppression hypothesis. These results do not support the notion that the N2pc unequivocally mirrors distractor-suppression processes. 
We propose that the N2pc indexes mechanisms involved in identifying and localizing relevant stimuli in the scene through enhancement of their features and not suppression of distractors.", "title": "" }, { "docid": "e8ac779e821b27e7cb7fb63716bc1024", "text": "Misogynist abuse has now become serious enough to attract attention from scholars of Law [7]. Social network platform providers have been forced to address this issue, such that Twitter is now very clear about what constitutes abusive behaviour, and has responded by updating their trust and safety rules [16].", "title": "" }, { "docid": "b06653abc5e287c72fc68247610ef76a", "text": "Radio Frequency Identification (RFID) is the name given to a technology that uses tags, readers and backend servers to form a system that has numerous applications in many areas, many already discovered and the rest still to be explored. Before implementing an RFID system, security issues must be considered carefully; not taking care of security issues could lead to severe consequences. This paper gives an overview of RFID: an introduction, RFID fundamentals, the basic structure of an RFID system, some of its numerous applications, and security issues and their remedies.", "title": "" }, { "docid": "a37fa6118f4ff2e92977186ec7d5c3c6", "text": "The determination of prices is a key function of markets, yet it is just beginning to be studied by sociologists. Most theories view prices as a consequence of economic processes. By contrast, we consider how social structure shapes prices. Building on embeddedness arguments and original fieldwork at large law firms, we propose that a firm's embedded relationships influence prices by prompting private information flows and informal governance arrangements that add unique value to goods and services. We test our arguments with a separate longitudinal dataset on the pricing of legal services by law firms that represent corporate America. We find that embeddedness can significantly increase and decrease prices net of standard variables and in markets for both complex and routine legal services. Moreover, results show that three forms of embeddedness (embedded ties, board memberships, and status) affect prices in different directions and have different magnitudes of effects that depend on the complexity of the legal service.", "title": "" }, { "docid": "77ff4bd27b795212d355162822fc0cdc", "text": "We consider the problem of enriching current object detection systems with veridical object sizes and relative depth estimates from a single image. There are several technical challenges to this, such as occlusions, lack of calibration data and the scale ambiguity between object size and distance. These have not been addressed in full generality in previous work. Here we propose to tackle these issues by building upon advances in object recognition and using recently created large-scale datasets. We first introduce the task of amodal bounding box completion, which aims to infer the full extent of the object instances in the image. We then propose a probabilistic framework for learning category-specific object size distributions from available annotations and leverage these in conjunction with amodal completions to infer veridical sizes of objects in novel images. 
Finally, we introduce a focal length prediction approach that exploits scene recognition to overcome inherent scale ambiguities and demonstrate qualitative results on challenging real-world scenes.", "title": "" }, { "docid": "b4b66392aec0c4e00eb6b1cabbe22499", "text": "ADJ: Adjectives that occur with the NP CMC: Orthographic features of the NP CPL: Phrases that occur with the NP VERB: Verbs that appear with the NP Task: Predict whether a noun phrase (NP) belongs to a category (e.g. "city") Category # Examples animal 20,733 beverage 18,932 bird 19,263 bodypart 21,840 city 21,778 disease 21,827 drug 20,452 fish 19,162 food 19,566 fruit 18,911 muscle 21,606 person 21,700 protein 21,811 river 21,723 vegetable 18,826", "title": "" }, { "docid": "88bc4f8a24a2e81a9c133d11a048ca10", "text": "In this paper, we give an overview of the HDF5 technology suite and some of its applications. We discuss the HDF5 data model, the HDF5 software architecture and some of its performance enhancing capabilities.", "title": "" }, { "docid": "c9b0954503fa8b6309a0736ac1a5cb62", "text": "Rigid Point Cloud Registration (PCReg) refers to the problem of finding the rigid transformation between two sets of point clouds. This problem is particularly important due to the advances in new 3D sensing hardware, and it is challenging because neither the correspondence nor the transformation parameters are known. Traditional local PCReg methods (e.g., ICP) rely on local optimization algorithms, which can get trapped in bad local minima in the presence of noise, outliers, bad initializations, etc. To alleviate these issues, this paper proposes Inverse Composition Discriminative Optimization (ICDO), an extension of Discriminative Optimization (DO), which learns a sequence of update steps from synthetic training data that search the parameter space for an improved solution. Unlike DO, ICDO is object-independent and generalizes even to unseen shapes. We evaluated ICDO on both synthetic and real data, and show that ICDO can match the speed and outperform the accuracy of state-of-the-art PCReg algorithms.", "title": "" }, { "docid": "d1bd5406b31cec137860a73b203d6bef", "text": "A chemical-mechanical planarization (CMP) model based on lubrication theory is developed which accounts for pad compressibility, pad porosity and means of slurry delivery. Slurry film thickness and velocity distributions between the pad and the wafer are predicted using the model. Two regimes of CMP operation are described: the lubrication regime (for ~40-70 µm slurry film thickness) and the contact regime (for thinner films). These regimes are identified for two different pads using experimental copper CMP data and the predictions of the model. The removal rate correlation based on lubrication and mass transport theory agrees well with our experimental data in the lubrication regime. © 2000 Elsevier Science S.A. All rights reserved.", "title": "" } ]
scidocsrr
80be253c6f3f2578e7b8c291ebf98f4b
Recent developments in human gait research: parameters, approaches, applications, machine learning techniques, datasets and challenges
[ { "docid": "c6e0843498747096ebdafd51d4b5cca6", "text": "The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.", "title": "" } ]
[ { "docid": "59dfaac9730e526604193f06b48a9dd5", "text": "We evaluated the functional and oncological outcome of ultralow anterior resection and coloanal anastomosis (CAA), which is a popular technique for preserving anal sphincter in patients with distal rectal cancer. Forty-eight patients were followed up for 6–100 months regarding fecal or gas incontinence, frequency of bowel movement, and local or systemic recurrence. The main operative techniques were total mesorectal excision with autonomic nerve preservation; the type of anastomosis was straight CAA, performed by the perianal hand sewn method in 38 cases and by the double-stapled method in 10. Postoperative complications included transient urinary retention (n=7), anastomotic stenosis (n=3), anastomotic leakage (n=3), rectovaginal fistula (n=2), and cancer positive margin (n=1; patient refused reoperation). Overall there were recurrences in seven patients (14.5%): one local and one systemic recurrence in stage B2; and one local, two systemic, and two combined local and systemic in C2. The mean frequency of bowel movements was 6.1 per day after 3 months, 4.4 after 1 year, and 3.1 after 2 years. The Kirwan grade for fecal incontinence was 2.7 after 3 months, 1.8 after 1 year, and 1.5 after 2 years. With careful selection of patients and good operative technique, CAA can be performed safely in distal rectal cancer. Normal continence and acceptable frequency of bowel movements can be obtained within 1 year after operation without compromising the rate of local recurrence.", "title": "" }, { "docid": "82a40130bc83a2456c8368fa9275c708", "text": "This paper presents a novel strategy for using ant colony optimization (ACO) to evolve the structure of deep recurrent neural networks. While versions of ACO for continuous parameter optimization have been previously used to train the weights of neural networks, to the authors’ knowledge they have not been used to actually design neural networks. The strategy presented is used to evolve deep neural networks with up to 5 hidden and 5 recurrent layers for the challenging task of predicting general aviation flight data, and is shown to provide improvements of 63 % for airspeed, a 97 % for altitude and 120 % for pitch over previously best published results, while at the same time not requiring additional input neurons for residual values. The strategy presented also has many benefits for neuro evolution, including the fact that it is easily parallizable and scalable, and can operate using any method for training neural networks. Further, the networks it evolves can typically be trained in fewer iterations than fully connected networks.", "title": "" }, { "docid": "f9f1cf949093c41a84f3af854a2c4a8b", "text": "Modern TCP implementations are capable of very high point-to-point bandwidths. Delivered performance on the fastest networks is often limited by the sending and receiving hosts, rather than by the network hardware or the TCP protocol implementation itself. In this case, systems can achieve higher bandwidth by reducing host overheads through a variety of optimizations above and below the TCP protocol stack, given support from the network interface. 
This paper surveys the most important of these optimizations, and illustrates their effects quantitatively with empirical results from a an experimental network delivering up to two gigabits per second of point-to-point TCP bandwidth.", "title": "" }, { "docid": "153f452486e2eacb9dc1cf95275dd015", "text": "This paper presents a Fuzzy Neural Network (FNN) control system for a traveling-wave ultrasonic motor (TWUSM) driven by a dual mode modulation non-resonant driving circuit. First, the motor configuration and the proposed driving circuit of a TWUSM are introduced. To drive a TWUSM effectively, a novel driving circuit, that simultaneously employs both the driving frequency and phase modulation control scheme, is proposed to provide two-phase balance voltage for a TWUSM. Since the dynamic characteristics and motor parameters of the TWUSM are highly nonlinear and time-varying, a FNN control system is therefore investigated to achieve high-precision speed control. The proposed FNN control system incorporates neuro-fuzzy control and the driving frequency and phase modulation to solve the problem of nonlinearities and variations. The proposed control system is digitally implemented by a low-cost digital signal processor based microcontroller, hence reducing the system hardware size and cost. The effectiveness of the proposed driving circuit and control system is verified with hardware experiments under the occurrence of uncertainties. In addition, the advantages of the proposed control scheme are indicated in comparison with a conventional proportional-integral control system.", "title": "" }, { "docid": "f31ec6460f0e938f8e43f5b9be055aaf", "text": "Many people have turned to technological tools to help them be physically active. To better understand how goal-setting, rewards, self-monitoring, and sharing can encourage physical activity, we designed a mobile phone application and deployed it in a four-week field study (n=23). Participants found it beneficial to have secondary and primary weekly goals and to receive non-judgmental reminders. However, participants had problems with some features that are commonly used in practice and suggested in the literature. For example, trophies and ribbons failed to motivate most participants, which raises questions about how such rewards should be designed. A feature to post updates to a subset of their Facebook NewsFeed created some benefits, but barriers remained for most participants.", "title": "" }, { "docid": "1169d70de6d0c67f52ecac4d942d2224", "text": "All drivers have habits behind the wheel. Different drivers vary in how they hit the gas and brake pedals, how they turn the steering wheel, and how much following distance they keep to follow a vehicle safely and comfortably. In this paper, we model such driving behaviors as car-following and pedal operation patterns. The relationship between following distance and velocity mapped into a two-dimensional space is modeled for each driver with an optimal velocity model approximated by a nonlinear function or with a statistical method of a Gaussian mixture model (GMM). Pedal operation patterns are also modeled with GMMs that represent the distributions of raw pedal operation signals or spectral features extracted through spectral analysis of the raw pedal operation signals. The driver models are evaluated in driver identification experiments using driving signals collected in a driving simulator and in a real vehicle. 
Experimental results show that the driver model based on the spectral features of pedal operation signals efficiently models driver individual differences and achieves an identification rate of 76.8% for a field test with 276 drivers, resulting in a relative error reduction of 55% over driver models that use raw pedal operation signals without spectral analysis", "title": "" }, { "docid": "cdee51ab9562e56aee3fff58cd2143ba", "text": "Stochastic gradient descent (SGD) still is the workhorse for many practical problems. However, it converges slow, and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably. But many attempts in this direction either aim at solving specialized problems, or result in significantly more complicated methods than SGD. This paper proposes a new method to adaptively estimate a preconditioner, such that the amplitudes of perturbations of preconditioned stochastic gradient match that of the perturbations of parameters to be optimized in a way comparable to Newton method for deterministic optimization. Unlike the preconditioners based on secant equation fitting as done in deterministic quasi-Newton methods, which assume positive definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and nonconvex optimizations with exact or noisy gradients. When stochastic gradient is used, it can naturally damp the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications, they are applicable to large-scale problems. Experimental results demonstrate that equipped with the new preconditioner, without any tuning effort, preconditioned SGD can efficiently solve many challenging problems like the training of a deep neural network or a recurrent neural network requiring extremely long-term memories.", "title": "" }, { "docid": "3baec781f7b5aaab8598c3628ea0af3b", "text": "Article history: Received 15 November 2010 Received in revised form 9 February 2012 Accepted 15 February 2012 Information professionals performing business activity related investigative analysis must routinely associate data from a diverse range of Web based general-interest business and financial information sources. XBRL has become an integral part of the financial data landscape. At the same time, Open Data initiatives have contributed relevant financial, economic, and business data to the pool of publicly available information on the Web but the use of XBRL in combination with Open Data remains at an early state of realisation. In this paper we argue that Linked Data technology, created for Web scale information integration, can accommodate XBRL data and make it easier to combine it with open datasets. This can provide the foundations for a global data ecosystem of interlinked and interoperable financial and business information with the potential to leverage XBRL beyond its current regulatory and disclosure role. We outline the uses of Linked Data technologies to facilitate XBRL consumption in conjunction with non-XBRL Open Data, report on current activities and highlight remaining challenges in terms of information consolidation faced by both XBRL and Web technologies. © 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "d4ed4cad670b1e11cfb3c869e34cf9fd", "text": "BACKGROUND\nDespite the many antihypertensive medications available, two-thirds of patients with hypertension do not achieve blood pressure control. 
This is thought to be due to a combination of poor patient education, poor medication adherence, and \"clinical inertia.\" The present trial evaluates an intervention consisting of health coaching, home blood pressure monitoring, and home medication titration as a method to address these three causes of poor hypertension control.\n\n\nMETHODS/DESIGN\nThe randomized controlled trial will include 300 patients with poorly controlled hypertension. Participants will be recruited from a primary care clinic in a teaching hospital that primarily serves low-income populations.An intervention group of 150 participants will receive health coaching, home blood pressure monitoring, and home-titration of antihypertensive medications during 6 months. The control group (n=150) will receive health coaching plus home blood pressure monitoring for the same duration. A passive control group will receive usual care. Blood pressure measurements will take place at baseline, and after 6 and 12 months. The primary outcome will be change in systolic blood pressure after 6 and 12 months. Secondary outcomes measured will be change in diastolic blood pressure, adverse events, and patient and provider satisfaction.\n\n\nDISCUSSION\nThe present study is designed to assess whether the 3-pronged approach of health coaching, home blood pressure monitoring, and home medication titration can successfully improve blood pressure, and if so, whether this effect persists beyond the period of the intervention.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier: NCT01013857.", "title": "" }, { "docid": "c61b210036484009cf8077a803824695", "text": "Synthetic Aperture Radar (SAR) image is disturbed by multiplicative noise known as speckle. In this paper, based on the power of deep fully convolutional network, an encoding-decoding framework is introduced for multisource SAR image despeckling. The network contains a series of convolution and deconvolution layers, forming an end-to-end non-linear mapping between noise and clean SAR images. With addition of skip connection, the network can keep image details and accomplish the strategy for residual learning which solves the notorious problem of vanishing gradients and accelerates convergence. The experimental results on simulated and real SAR images show that the introduced approach achieves improvements in both despeckling performance and time efficiency over the state-of-the-art despeckling methods.", "title": "" }, { "docid": "8fb598f1f55f7a20bfc05865fc0a5efa", "text": "The detection of anomalous executions is valuable for reducing potential hazards in assistive manipulation. Multimodal sensory signals can be helpful for detecting a wide range of anomalies. However, the fusion of high-dimensional and heterogeneous modalities is a challenging problem for model-based anomaly detection. We introduce a long short-term memory-based variational autoencoder (LSTM-VAE) that fuses signals and reconstructs their expected distribution by introducing a progress-based varying prior. Our LSTM-VAE-based detector reports an anomaly when a reconstruction-based anomaly score is higher than a state-based threshold. For evaluations with 1555 robot-assisted feeding executions, including 12 representative types of anomalies, our detector had a higher area under the receiver operating characteristic curve of 0.8710 than 5 other baseline detectors from the literature. 
We also show the variational autoencoding and state-based thresholding are effective in detecting anomalies from 17 raw sensory signals without significant feature engineering effort.", "title": "" }, { "docid": "577c557bc6fcddcb51e962e68ed034ed", "text": "Text categorization is used to assign each text document to predefined categories. This paper presents a new text classification method for classifying Chinese text based on Rocchio algorithm. We firstly use the TFIDF to extract document vectors from the training documents which have been correctly categorized, and then use those document vectors to generate codebooks as classification models using the LBG and Rocchio algorithm. The codebook is then used to categorize the target documents using vector scores. We tested this method in the experiment and the result shows that this method can achieve better performance.", "title": "" }, { "docid": "d72652b6ad54422e6864baccc88786a8", "text": "Neisseria meningitidis is a major global pathogen that continues to cause endemic and epidemic human disease. Initial exposure typically occurs within the nasopharynx, where the bacteria can invade the mucosal epithelium, cause fulminant sepsis, and disseminate to the central nervous system, causing bacterial meningitis. Recently, Chamot-Rooke and colleagues1 described a unique virulence property of N. meningitidis in which the bacterial surface pili, after contact with host cells, undergo a modification that facilitates both systemic invasion and the spread of colonization to close contacts. Person-to-person spread of N. meningitidis can result in community epidemics of bacterial meningitis, with major consequences for public health. In resource-poor nations, cyclical outbreaks continue to result in high mortality and long-term disability, particularly in sub-Saharan Africa, where access to early diagnosis, antibiotic therapy, and vaccination is limited.2,3 An exclusively human pathogen, N. meningitidis uses several virulence factors to cause disease. Highly charged and hydrophilic capsular polysaccharides protect N. meningitidis from phagocytosis and complement-mediated bactericidal activity of the innate immune system. A family of proteins (called opacity proteins) on the bacterial outer membrane facilitate interactions with both epithelial and endothelial cells. These proteins are phase-variable — that is, the genome of the bacterium encodes related opacity proteins that are variably expressed, depending on environment, allowing the bacterium to adjust to rapidly changing environmental conditions. Lipooligosaccharide, analogous to the lipopolysaccharide of enteric gram-negative bacteria, contains a lipid A moiety with endotoxin activity that promotes the systemic sepsis encountered clinically. However, initial attachment to host cells is primarily mediated by filamentous organelles referred to as type IV pili, which are common to many bacterial pathogens and unique in their ability to undergo both antigenic and phase variation. Within hours of attachment to the host endothelial cell, N. meningitidis induces the formation of protrusions in the plasma membrane of host cells that aggregate the bacteria into microcolonies and facilitate pili-mediated contacts between bacteria and between bacteria and host cells. After attachment and aggregation, N. 
meningitidis detaches from the aggregates to systemically invade the host, by means of a transcellular pathway that crosses the respiratory epithelium,4 or becomes aerosolized and spreads the colonization of new hosts (Fig. 1). Chamot-Rooke et al. dissected the molecular mechanism underlying this critical step of systemic invasion and person-to-person spread and reported that pathogenesis depends on a unique post-translational modification of the type IV pili. Using whole-protein mass spectroscopy, electron microscopy, and molecular modeling, they showed that the major component of N. meningitidis type IV pili (called PilE or pilin) undergoes an unusual post-translational modification by phosphoglycerol. Expression of pilin phosphotransferase, the enzyme that transfers phosphoglycerol onto pilin, is increased within 4 hours of meningococcus contact with host cells and modifies the serine residue at amino acid position 93 of pilin, altering the charge of the pilin structure and thereby destabilizing the pili bundles, reducing bacterial aggregation, and promoting detachment from the cell surface. Strains of N. meningitidis in which phosphoglycerol modification of pilin occurred had a greatly enhanced ability to cross epithelial monolayers, a finding that supports the view that this virulence property, which causes deaggregation, promotes both transmission to new hosts and systemic invasion. Although this new molecular understanding of N. meningitidis virulence in humans is provoc-", "title": "" }, { "docid": "83f970bc22a2ada558aaf8f6a7b5a387", "text": "The imputeTS package specializes on univariate time series imputation. It offers multiple state-of-the-art imputation algorithm implementations along with plotting functions for time series missing data statistics. While imputation in general is a well-known problem and widely covered by R packages, finding packages able to fill missing values in univariate time series is more complicated. The reason for this lies in the fact, that most imputation algorithms rely on inter-attribute correlations, while univariate time series imputation instead needs to employ time dependencies. This paper provides an introduction to the imputeTS package and its provided algorithms and tools. Furthermore, it gives a short overview about univariate time series imputation in R. Introduction In almost every domain from industry (Billinton et al., 1996) to biology (Bar-Joseph et al., 2003), finance (Taylor, 2007) up to social science (Gottman, 1981) different time series data are measured. While the recorded datasets itself may be different, one common problem are missing values. Many analysis methods require missing values to be replaced with reasonable values up-front. In statistics this process of replacing missing values is called imputation. Time series imputation thereby is a special sub-field in the imputation research area. Most popular techniques like Multiple Imputation (Rubin, 1987), Expectation-Maximization (Dempster et al., 1977), Nearest Neighbor (Vacek and Ashikaga, 1980) and Hot Deck (Ford, 1983) rely on interattribute correlations to estimate values for the missing data. Since univariate time series do not possess more than one attribute, these algorithms cannot be applied directly. Effective univariate time series imputation algorithms instead need to employ the inter-time correlations. On CRAN there are several packages solving the problem of imputation of multivariate data. 
Most popular and mature (among others) are AMELIA (Honaker et al., 2011), mice (van Buuren and Groothuis-Oudshoorn, 2011), VIM (Kowarik and Templ, 2016) and missMDA (Josse and Husson, 2016). However, since these packages are designed for multivariate data imputation only they do not work for univariate time series. At the moment imputeTS (Moritz, 2016a) is the only package on CRAN that is solely dedicated to univariate time series imputation and includes multiple algorithms. Nevertheless, there are some other packages that include imputation functions as addition to their core package functionality. Most noteworthy being zoo (Zeileis and Grothendieck, 2005) and forecast (Hyndman, 2016). Both packages offer also some advanced time series imputation functions. The packages spacetime (Pebesma, 2012), timeSeries (Rmetrics Core Team et al., 2015) and xts (Ryan and Ulrich, 2014) should also be mentioned, since they contain some very simple but quick time series imputation methods. For a broader overview about available time series imputation packages in R see also (Moritz et al., 2015). In this technical report we evaluate the performance of several univariate imputation functions in R on different time series. This paper is structured as follows: Section Overview imputeTS package gives an overview, about all features and functions included in the imputeTS package. This is followed by Usage examples of the different provided functions. The paper ends with a Conclusions section. Overview imputeTS package The imputeTS package can be found on CRAN and is an easy to use package that offers several utilities for ’univariate, equi-spaced, numeric time series’. Univariate means there is just one attribute that is observed over time. Which leads to a sequence of single observations o1, o2, o3, ... on at successive points t1, t2, t3, ... tn in time. Equi-spaced means, that time increments between successive data points are equal |t1 − t2| = |t2 − t3| = ... = |tn−1 − tn|. Numeric means that the observations are measurable quantities that can be described as a number. In the first part of this section, a general overview about all available functions and datasets is given. The R Journal Vol. XX/YY, AAAA 20ZZ ISSN 2073-4859 Contributed research article 2 This is followed by more detailed overviews about the three areas covered by the package: ’Plots & Statistics’, ’Imputation’ and ’Datasets’. Information about how to apply these functions and tools can be found later in the Usage examples section. General overview As can be seen in Table 1, beyond several imputation algorithm implementations the package also includes plotting functions and datasets. The imputation algorithms can be divided into rather simple but fast approaches like mean imputation and more advanced algorithms that need more computation time like kalman smoothing on a structural model. Simple Imputation Imputation Plots & Statistics Datasets na.locf na.interpolation plotNA.distribution tsAirgap na.mean na.kalman plotNA.distributionBar tsAirgapComplete na.random na.ma plotNA.gapsize tsHeating na.replace na.seadec plotNA.imputations tsHeatingComplete na.remove na.seasplit statsNA tsNH4 tsNH4Complete Table 1: General Overview imputeTS package As a whole, the package aims to support the user in the complete process of replacing missing values in time series. This process starts with analyzing the distribution of the missing values using the statsNA function and the plots of plotNA.distribution, plotNA.distributionBar, plotNA.gapsize. 
In the next step the actual imputation can take place with one of the several algorithm options. Finally, the imputation results can be visualized with the plotNA.imputations function. Additionally, the package contains three datasets, each in a version with and without missing values, that can be used to test imputation algorithms. Plots & Statistics functions An overview about the available plots and statistics functions can be found in Table 2. To get a good impression what the plots look like section Usage examples is recommended. Function Description plotNA.distribution Visualize Distribution of Missing Values plotNA.distributionBar Visualize Distribution of Missing Values (Barplot) plotNA.gapsize Visualize Distribution of NA gap sizes plotNA.imputations Visualize Imputed Values statsNA Print Statistics about the Missing Data Table 2: Overview Plots & Statistics The statsNA function calculates several missing data statistics of the input data. This includes overall percentage of missing values, absolute amount of missing values, amount of missing value in different sections of the data, longest series of consecutive NAs and occurrence of consecutive NAs. The plotNA.distribution function visualizes the distribution of NAs in a time series. This is done using a standard time series plot, in which areas with missing data are colored red. This enables the user to see at first sight where in the series most of the missing values are located. The plotNA.distributionBar function provides the same insights to users, but is designed for very large time series. This is necessary for time series with 1000 and more observations, where it is not possible to plot each observation as a single point. The plotNA.gapsize function provides information about consecutive NAs by showing the most common NA gap sizes in the time series. The plotNA.imputations function is designated for visual inspection of the results after applying an imputation algorithm. Therefore, newly imputed observations are shown in a different color than the rest of the series. The R Journal Vol. XX/YY, AAAA 20ZZ ISSN 2073-4859 Contributed research article 3 Imputation functions An overview about all available imputation algorithms can be found in Table 3. Even if these functions are really easy applicable, some examples can be found later in section Usage examples. More detailed information about the theoretical background of the algorithms can be found in the imputeTS manual (Moritz, 2016b). Function Option Description na.interpolation linear Imputation by Linear Interpolation spline Imputation by Spline Interpolation stine Imputation by Stineman Interpolation na.kalman StructTS Imputation by Structural Model & Kalman Smoothing auto.arima Imputation by ARIMA State Space Representation & Kalman Sm. 
na.locf locf Imputation by Last Observation Carried Forward nocb Imputation by Next Observation Carried Backward na.ma simple Missing Value Imputation by Simple Moving Average linear Missing Value Imputation by Linear Weighted Moving Average exponential Missing Value Imputation by Exponential Weighted Moving Average na.mean mean MissingValue Imputation by Mean Value median Missing Value Imputation by Median Value mode Missing Value Imputation by Mode Value na.random Missing Value Imputation by Random Sample na.replace Replace Missing Values by a Defined Value na.seadec Seasonally Decomposed Missing Value Imputation na.seasplit Seasonally Splitted Missing Value Imputation na.remove Remove Missing Values Table 3: Overview Imputation Algorithms For convenience similar algorithms are available under one function name as parameter option. For example linear, spline and stineman interpolation are all included in the na.interpolation function. The na.mean, na.locf, na.replace, na.random functions are all simple and fast. In comparison, na.interpolation, na.kalman, na.ma, na.seasplit, na.seadec are more advanced algorithms that need more computation time. The na.remove function is a special case, since it only deletes all missing values. Thus, it is not really an imputation function. It should be handled with care since removing observations may corrupt the time information of the series. The na.seasplit and na.seadec functions are as well exceptions. These perform seasonal split / decomposition operations as a preprocessing step. For the imputation itself, one out of the other imputation algorithms can be used (which one can be set as option). Looking at all available imputation methods, no single overall best method can b", "title": "" }, { "docid": "ab44369792f03c9d1a171789fca24001", "text": "High-speed actions are known to impact soccer performance and can be categorized into actions requiring maximal speed, acceleration, or agility. Contradictory findings have been reported as to the extent of the relationship between the different speed components. This study comprised 106 professional soccer players who were assessed for 10-m sprint (acceleration), flying 20-m sprint (maximum speed), and zigzag agility performance. Although performances in the three tests were all significantly correlated (p < 0.0005), coefficients of determination (r(2)) between the tests were just 39, 12, and 21% for acceleration and maximum speed, acceleration and agility, and maximum speed and agility, respectively. Based on the low coefficients of determination, it was concluded that acceleration, maximum speed, and agility are specific qualities and relatively unrelated to one another. The findings suggest that specific testing and training procedures for each speed component should be utilized when working with elite players.", "title": "" }, { "docid": "6d5429ddf4050724432da73af60274d6", "text": "We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the subsentence or “concept”-level, to a sentencelevel model, previously solved with an ILP. Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. 
The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics.", "title": "" }, { "docid": "055cb9aca6b16308793944154dc7866a", "text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?", "title": "" }, { "docid": "2d259ed5d3a1823da7cf54302d8ad1a6", "text": "We present Lynx-robot, a quadruped, modular, compliant machine. It alternately features a directly actuated, single-joint spine design, or an actively supported, passive compliant, multi-joint spine configuration. Both spine configurations bend in the sagittal plane. This study aims at characterizing these two, largely different spine concepts, for a bounding gait of a robot with a three segmented, pantograph leg design. An earlier, similar-sized, bounding, quadruped robot named Bobcat with a two-segment leg design and a directly actuated, single-joint spine design serves as a comparison robot, to study and compare the effect of the leg design on speed, while keeping the spine design fixed. 
Both proposed spine designs (single rotatory and active and multi-joint compliant) reach moderate, self-stable speeds.", "title": "" }, { "docid": "03966c28d31e1c45896eab46a1dcce57", "text": "For many applications it is useful to sample from a nite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M suuciently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately it can be diicult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the algorithm itself. If the state space has a partial order that is preserved under the moves of the Markov chain, then the coupling is often particularly eecient. Using our approach one can sample from the Gibbs distributions associated with various statistical mechanics models (including Ising, random-cluster, ice, and dimer) or choose uniformly at random from the elements of a nite distributive lattice.", "title": "" } ]
scidocsrr
0b4fb56808fba1023fba71043909f1bf
Cohesion, Coherence, and Expert Evaluations of Writing Proficiency
[ { "docid": "6ee0c9832d82d6ada59025d1c7bb540e", "text": "Advances in computational linguistics and discourse processing have made it possible to automate many language- and text-processing mechanisms. We have developed a computer tool called Coh-Metrix, which analyzes texts on over 200 measures of cohesion, language, and readability. Its modules use lexicons, part-of-speech classifiers, syntactic parsers, templates, corpora, latent semantic analysis, and other components that are widely used in computational linguistics. After the user enters an English text, CohMetrix returns measures requested by the user. In addition, a facility allows the user to store the results of these analyses in data files (such as Text, Excel, and SPSS). Standard text readability formulas scale texts on difficulty by relying on word length and sentence length, whereas Coh-Metrix is sensitive to cohesion relations, world knowledge, and language and discourse characteristics.", "title": "" } ]
[ { "docid": "befd91b3e6874b91249d101f8373db01", "text": "Today's biomedical research has become heavily dependent on access to the biological knowledge encoded in expert curated biological databases. As the volume of biological literature grows rapidly, it becomes increasingly difficult for biocurators to keep up with the literature because manual curation is an expensive and time-consuming endeavour. Past research has suggested that computer-assisted curation can improve efficiency, but few text-mining systems have been formally evaluated in this regard. Through participation in the interactive text-mining track of the BioCreative 2012 workshop, we developed PubTator, a PubMed-like system that assists with two specific human curation tasks: document triage and bioconcept annotation. On the basis of evaluation results from two external user groups, we find that the accuracy of PubTator-assisted curation is comparable with that of manual curation and that PubTator can significantly increase human curatorial speed. These encouraging findings warrant further investigation with a larger number of publications to be annotated. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/", "title": "" }, { "docid": "2d4101e4419d7f6a407c5673f74246e6", "text": "Core-based design and reuse are the two key elements for an efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need of a common test infrastructure working with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance in both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description to the IEEE Standard 1500. This step is mandatory to fully trust the wrapper functionalities in applying the test sequences to the core. We present a systematic methodology to build a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) to the standard.", "title": "" }, { "docid": "068f9823d61804eba41d2b0bd2300a36", "text": "Congenital heart disease (CHD) occurs in 4-13 per 1000 births in the United States. While many risk factors for CHD have been identified, more than 90% of cases occur in low-risk patients. Guidelines for fetal cardiac screening during the second trimester anatomy ultrasound have been developed by the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) in order to improve antenatal detection rates and to standardize the fetal cardiac screening examination. Patients found to be at increased risk of CHD because of risk factors or an abnormal screening examination should be referred for second trimester fetal echocardiography. Recently, 3D and 4D ultrasound techniques are being utilized to enhance detection rates and to better characterize cardiac lesions, and several first trimester ultrasound screening markers have been proposed to identify patients at increased risk of CHD. 
However, detection rates have not improved significantly due to limitations such as cost, access, and training that are associated with new technologies and screening methods. The most cost effective way to improve detection rates of CHD may be to standardize screening protocols across practices according to established guidelines and to have a low threshold for referral for fetal echocardiography.", "title": "" }, { "docid": "cd08d6df9730f56cc51adf799482d2a3", "text": "Recent advances in chemical composition and new production techniques resulted in improved biocompatibility and permeability of dialysis membranes. Among these, the creation of a new class of membranes called medium cut-off (MCO) represents an important step towards improvement of clinical outcomes. Such membranes have been developed to improve the clearance of medium to high molecular weight (MW) solutes (i.e. uraemic toxins in the range of 5-50 kDa). MCO membranes have peculiar retention onset and cut-off characteristics. Due to a modified sieving profile, MCO membranes have also been described as high-retention onset. The significant internal filtration achieved in MCO haemodialysers provides a remarkable convective clearance of medium to high MW solutes. The marginal loss of albumin observed in MCO membranes compared with high cut-off membranes is considered acceptable, if not beneficial, producing a certain clearance of protein-bound solutes. The application of MCO membranes in a classic dialysis modality characterizes a new technique called expanded haemodialysis. This therapy does not need specific software or dedicated hardware, making its application possible in every setting where the quality of dialysis fluid meets current standards.", "title": "" }, { "docid": "d32bdf27607455fb3416a4e3e3492f01", "text": "Photo-editing software restricts the control of objects in a photograph to the 2D image plane. We present a method that enables users to perform the full range of 3D manipulations, including scaling, rotation, translation, and nonrigid deformations, to an object in a photograph. As 3D manipulations often reveal parts of the object that are hidden in the original photograph, our approach uses publicly available 3D models to guide the completion of the geometry and appearance of the revealed areas of the object. The completion process leverages the structure and symmetry in the stock 3D model to factor out the effects of illumination, and to complete the appearance of the object. We demonstrate our system by producing object manipulations that would be impossible in traditional 2D photo-editing programs, such as turning a car over, making a paper-crane flap its wings, or manipulating airplanes in a historical photograph to change its story.", "title": "" }, { "docid": "05dc82e180514733bfc1f0bf5638178e", "text": "There is growing interest in improving the design of deep network architectures to be both accurate and low cost. This paper explores semantic specialization as a mechanism for improving the computational efficiency (accuracy-per-unit-cost) of inference in the context of image classification. Specifically, we propose a network architecture template called HydraNet, which enables state-of-the-art architectures for image classification to be transformed into dynamic architectures which exploit conditional execution for efficient inference. 
HydraNets are wide networks containing distinct components specialized to compute features for visually similar classes, but they retain efficiency by dynamically selecting only a small number of components to evaluate for any one input image. This design is made possible by a soft gating mechanism that encourages component specialization during training and accurately performs component selection during inference. We evaluate the HydraNet approach on both the CIFAR-100 and ImageNet classification tasks. On CIFAR, applying the HydraNet template to the ResNet and DenseNet family of models reduces inference cost by 2-4× while retaining the accuracy of the baseline architectures. On ImageNet, applying the HydraNet template improves accuracy up to 2.5% when compared to an efficient baseline architecture with similar inference cost.", "title": "" }, { "docid": "1701da2aed094fdcbfaca6c2252d2e53", "text": "Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras.", "title": "" }, { "docid": "5946378b291a1a0e1fb6df5cd57d716f", "text": "Robots are being deployed in an increasing variety of environments for longer periods of time. As the number of robots grows, they will increasingly need to interact with other robots. Additionally, the number of companies and research laboratories producing these robots is increasing, leading to the situation where these robots may not share a common communication or coordination protocol. While standards for coordination and communication may be created, we expect that robots will need to additionally reason intelligently about their teammates with limited information. This problem motivates the area of ad hoc teamwork in which an agent may potentially cooperate with a variety of teammates in order to achieve a shared goal. This article focuses on a limited version of the ad hoc teamwork problem in which an agent knows the environmental dynamics and has had past experiences with other teammates, though these experiences may not be representative of the current teammates. To tackle this problem, this article introduces a new general-purpose algorithm, PLASTIC, that reuses knowledge learned from previous teammates or provided by experts to quickly adapt to new teammates. 
This algorithm is instantiated in two forms: 1) PLASTIC–Model – which builds models of previous teammates’ behaviors and plans behaviors online using these models and 2) PLASTIC–Policy – which learns policies for cooperating with previous teammates and selects among these policies online. We evaluate PLASTIC on two benchmark tasks: the pursuit domain and robot soccer in the RoboCup 2D simulation domain. Recognizing that a key requirement of ad hoc teamwork is adaptability to previously unseen agents, the tests use more than 40 previously unknown teams on the first task and 7 previously unknown teams on the second. While PLASTIC assumes that there is some degree of similarity between the current and past teammates’ behaviors, no steps are taken in the experimental setup to make sure this assumption holds.", "title": "" }, { "docid": "269e1c0d737beafd10560360049c6ee3", "text": "There is no doubt that social media has gained wider acceptability and usability and is becoming one of the most important communication tools among students, especially at the higher levels of educational pursuit. Social media is widely viewed as having bridged the gap in communication that once existed, and within social media, Facebook, Twitter and others are now gaining more and more patronage. These websites and social forums are ways of communicating directly with other people socially. Social media has the potential of influencing decision-making in a very short time regardless of the distance. On the basis of its influence, benefits and demerits, this study was carried out to highlight the potential of social media in the academic setting for collaborative learning and for improving students' academic performance. The results show that collaborative learning relates positively and significantly to interaction with peers, interaction with teachers and engagement, which in turn impact students' academic performance.", "title": "" }, { "docid": "63b04046e1136290a97f885783dda3bd", "text": "This paper considers the design of secondary wireless mesh networks which use leased frequency channels. In a given geographic region, the available channels are individually priced and leased exclusively through a primary spectrum owner. The usage of each channel is also subject to published interference constraints so that the primary user is not adversely affected. When the network is designed and deployed, the secondary user would like to minimize the costs of using the required resources while satisfying its own traffic and interference requirements. This problem is formulated as a mixed integer optimization which gives the optimum deployment cost as a function of the secondary node positioning, routing, and frequency allocations. Because of the problem's complexity, the optimum result can only be found for small problem sizes. To accommodate more practical deployments, two algorithms are proposed and their performance is compared to solutions obtained from the optimization.
The first algorithm is a greedy flow-based scheme (GFB) which iterates over the individual node flows based on solving a much simpler optimization at each step. The second algorithm (ILS) uses an iterated local search whose initial solution is based on constrained shortest path routing. Our results show that the proposed algorithms perform well for a variety of network scenarios.", "title": "" }, { "docid": "a0d1d59fc987d90e500b3963ac11b2ad", "text": "The purpose of this paper is to present the applicability of THOMAS, an architecture specially designed to model agent-based virtual organizations, in the development of a multiagent system for managing and planning routes for clients in a mall. In order to build virtual organizations, THOMAS offers mechanisms to take into account their structure, behaviour, dynamic, norms and environment. Moreover, one of the primary characteristics of the THOMAS architecture is the use of agents with reasoning and planning capabilities. These agents can perform a dynamic reorganization when they detect changes in the environment. The proposed architecture is composed of a set of related modules that are appropriate for developing systems in highly volatile environments similar to the one presented in this study. This paper presents THOMAS as well as the results obtained after having applied the system to a case study. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "57e9467bfbc4e891acd00dcdac498e0e", "text": "Cross-cultural perspectives have brought renewed interest in the social aspects of the self and the extent to which individuals define themselves in terms of their relationships to others and to social groups. This article provides a conceptual review of research and theory of the social self, arguing that the personal, relational, and collective levels of self-definition represent distinct forms of self-representation with different origins, sources of self-worth, and social motivations. A set of 3 experiments illustrates how priming of the interpersonal or collective \"we\" can alter spontaneous judgments of similarity and self-descriptions.", "title": "" }, { "docid": "136a2f401b3af00f0f79b991ab65658f", "text": "Usage of online social business networks like LinkedIn and XING has become commonplace in today's workplace. This research addresses the question of what factors drive the intention to use online social business networks. The theoretical frame of the study is the Technology Acceptance Model (TAM) and its extensions, most importantly the TAM2 model. Data has been collected via a Web Survey among users of LinkedIn and XING from January to April 2010. Of 541 initial responders, 321 finished the questionnaire. Operationalization was tested using confirmatory factor analyses and causal hypotheses were evaluated by means of structural equation modeling. Core result is that the TAM2 model generally holds in the case of online social business network usage behavior, explaining 73% of the observed usage intention. This intention is most importantly driven by perceived usefulness, attitude towards usage and social norm, with the latter acting both directly and indirectly via perceived usefulness. However, perceived ease of use has—contrary to hypothesis—no direct effect on the attitude towards usage of online social business networks. Social norm has a strong indirect influence via perceived usefulness on attitude and intention, creating a network effect for peer users.
The results of this research provide implications for online social business network design and marketing. Customers seem to evaluate ease of use as an integral part of the usefulness of such a service, which leads to a situation where it cannot be dealt with separately by a service provider. Furthermore, the strong direct impact of social norm implies application of viral and peer-to-peer marketing techniques, while its strong indirect effect also implies the presence of a network effect which stabilizes the ecosystem of online social business service vendors.", "title": "" }, { "docid": "b1d34115a1da9cfb349fc4690f54a82e", "text": "There are several theories available to describe how managers choose a medium for communication. However, current technology can affect not only how we communicate but also what we communicate. As a result, the issue for designers of communication support systems has become broader: how should technology be designed to make communication more effective by changing the medium and the attributes of the message itself? The answer to this question requires a shift from current preoccupations with the medium of communication to a view that assesses the balance between medium and message form. There is also a need to look more closely at the process of communication in order to identify more precisely any potential areas of computer", "title": "" }, { "docid": "834a5cb9f2948443fbb48f274e02ca9c", "text": "The Carnegie Mellon Communicator is a telephone-based dialog system that supports planning in a travel domain. The implementation of such a system requires two complementary components, an architecture capable of managing interaction and the task, as well as a knowledge base that captures the speech, language and task characteristics specific to the domain. Given a suitable architecture, the principal effort in development is taken up in the acquisition and processing of a domain knowledge base. This paper describes a variety of techniques we have applied to modeling in acoustic, language, task, generation and synthesis components of the system.", "title": "" }, { "docid": "b3881be74f7338038b53dc6ddfa1183d", "text": "Molecular chaperones, ubiquitin ligases and proteasome impairment have been implicated in several neurodegenerative diseases, including Alzheimer's and Parkinson's disease, which are characterized by accumulation of abnormal protein aggregates (e.g. tau and alpha-synuclein respectively). Here we report that CHIP, a ubiquitin ligase that interacts directly with Hsp70/90, induces ubiquitination of the microtubule associated protein, tau. CHIP also increases tau aggregation. Consistent with this observation, diverse tau lesions in human postmortem tissue were found to be immunopositive for CHIP. Conversely, induction of Hsp70 through treatment with either geldanamycin or heat shock factor 1 leads to a decrease in tau steady-state levels and a selective reduction in detergent insoluble tau. Furthermore, 30-month-old mice overexpressing inducible Hsp70 show a significant reduction in tau levels.
Together these data demonstrate that the Hsp70/CHIP chaperone system plays an important role in the regulation of tau turnover and the selective elimination of abnormal tau species. Hsp70/CHIP may therefore play an important role in the pathogenesis of tauopathies and also represents a potential therapeutic target.", "title": "" }, { "docid": "ab97caed9c596430c3d76ebda55d5e6e", "text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 μm CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.", "title": "" }, { "docid": "b3998d818b12e9dc376afea3094ae23f", "text": "1. Andrew Borthwick and Ralph Grishman. 1999. A maximum entropy approach to named entity recognition. Ph. D. Thesis, Dept. of Computer Science, New York University. 2. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6645–6649. 3. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). The Ohio State University", "title": "" }, { "docid": "cce06ee82633408b765fa6c373011a9d", "text": "Employees’ poor compliance with information security policies is a perennial problem. Current information security analysis methods do not allow information security managers to capture the rationalities behind employees’ compliance and non-compliance. To address this shortcoming, this design science research paper suggests: (a) a Value-Based Compliance analysis method and (b) a set of design principles for methods that analyse different rationalities for information security. Our empirical demonstration shows that the method supports a systematic analysis of why employees comply/do not comply with policies. Thus we provide managers with a tool to make them more knowledgeable about employees’ information security behaviours. © 2016 Published by Elsevier B.V.", "title": "" }, { "docid": "b02782c0ce9512a0c1084bcb96a01636", "text": "OBJECTIVE\nRecently, public attention has focused on the possibility that social networking sites such as MySpace and Facebook are being widely used to sexually solicit underage youth, consequently increasing their vulnerability to sexual victimization. Beyond anecdotal accounts, however, whether victimization is more commonly reported in social networking sites is unknown.\n\n\nPARTICIPANTS AND METHODS\nThe Growing up With Media Survey is a national cross-sectional online survey of 1588 youth. Participants were 10- to 15-year-old youth who have used the Internet at least once in the last 6 months. The main outcome measures were unwanted sexual solicitation on the Internet, defined as unwanted requests to talk about sex, provide personal sexual information, and do something sexual, and Internet harassment, defined as rude or mean comments, or spreading of rumors.\n\n\nRESULTS\nFifteen percent of all of the youth reported an unwanted sexual solicitation online in the last year; 4% reported an incident on a social networking site specifically.
Thirty-three percent reported an online harassment in the last year; 9% reported an incident on a social networking site specifically. Among targeted youth, solicitations were more commonly reported via instant messaging (43%) and in chat rooms (32%), and harassment was more commonly reported in instant messaging (55%) than through social networking sites (27% and 28%, respectively).\n\n\nCONCLUSIONS\nBroad claims of victimization risk, at least defined as unwanted sexual solicitation or harassment, associated with social networking sites do not seem justified. Prevention efforts may have a greater impact if they focus on the psychosocial problems of youth instead of a specific Internet application, including funding for online youth outreach programs, school antibullying programs, and online mental health services.", "title": "" } ]
scidocsrr
431d0aad73adf14c4053a7d0813468c4
Caroline: An Autonomously Driving Vehicle for Urban Environments
[ { "docid": "21f45ec969ba3852d731a2e2119fc86e", "text": "When a large number of people with heterogeneous knowledge and skills run a project together, it is important to use a sensible engineering process. This especially holds for a project building an intelligent autonomously driving car to participate in the 2007 DARPA Urban Challenge. In this article, we present essential elements of a software and systems engineering process for the development of artificial intelligence capable of driving autonomously in complex urban situations. The process includes agile concepts, like test first approach, continuous integration of every software module and a reliable release and configuration management assisted by software tools in integrated development environments. However, the most important ingredients for an efficient and stringent development are the ability to efficiently test the behavior of the developed system in a flexible and modular simulator for urban situations.", "title": "" }, { "docid": "84a187b1e5331c4e7eb349c8b1358f14", "text": "We describe the maximum-likelihood parameter estimation problem and how the ExpectationMaximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.", "title": "" } ]
[ { "docid": "31ad014dcc23db46555ea6a7e2bea764", "text": "This work presents a novel architecture of deep neural networks to generate meshes approximating the surface of a 3D object from a single image. Compared to existing learning-based 3D reconstruction models, our architecture is characterized by (1) deep mesh deformation stacks with residual network design, where a simple mesh is transformed to approximate the target surface and undergoes multiple deformation steps to progressively refine the result and reduce the residuals, and (2) parallel paths per deformation step, which can exponentially enrich the generated meshes using deeper structure and more model parameters. We also propose novel regularization scheme that encourages the meshes to be both globally complementary to cover the target surface and locally consistent with each other. Empirical evaluation on benchmark datasets show advantage of the proposed architecture over existing methods.", "title": "" }, { "docid": "afa8dcb9dfbd99781c4b03d80f9ad85c", "text": "Recently, model-free reinforcement learning algorithms have been shown to solve challenging problems by learning from extensive interaction with the environment. A significant issue with transferring this success to the robotics domain is that interaction with the real world is costly, but training on limited experience is prone to overfitting. We present a method for learning to navigate, to a fixed goal and in a known environment, on a mobile robot. The robot leverages an interactive world model built from a single traversal of the environment, a pre-trained visual feature encoder, and stochastic environmental augmentation, to demonstrate successful zero-shot transfer under real-world environmental variations without fine-tuning.", "title": "" }, { "docid": "fb70de7ed3e42c37b130686bfa3aee47", "text": "Data from vehicles instrumented with GPS or other localization technologies are increasingly becoming widely available due to the investments in Connected and Automated Vehicles (CAVs) and the prevalence of personal mobile devices such as smartphones. Tracking or trajectory data from these probe vehicles are already being used in practice for travel time or speed estimation and for monitoring network conditions. However, there has been limited work on extracting other critical traffic flow variables, in particular density and flow, from probe data. This paper presents a microscopic approach (akin to car-following) for inferring the number of unobserved vehicles in between a set of probe vehicles in the traffic stream. In particular, we develop algorithms to extract and exploit the somewhat regular patterns in the trajectories when the probe vehicles travel through stop-and-go waves in congested traffic. Using certain critical points of trajectories as the input, the number of unobserved vehicles between consecutive probes are then estimated through a Naïve Bayes model. The parameters needed for the Naïve Bayes include means and standard deviations for the probability density functions (pdfs) for the distance headways between vehicles. These parameters are estimated through supervised as well as unsupervised learning methods. The proposed ideas are tested based on the trajectory data collected from US 101 and I-80 in California for the FHWA's NGSIM (next generation simulation) project. 
Under the dense traffic conditions analyzed, the results show that the number of unobserved vehicles between two probes can be predicted with an accuracy of ±1 vehicle almost always.", "title": "" }, { "docid": "9546092b8db5d22448af61df5f725bbf", "text": "This paper provides a new equivalent circuit model for a spurline filter section in an inhomogeneous coupled-line medium whose even and odd mode phase velocities are unequal. This equivalent circuit permits the exact filter synthesis to be performed easily. Millimeter-wave filters at 26 to 40 GHz and 75 to 110 GHz have been fabricated using the model, and experimental results are included which validate the equivalent circuit model.", "title": "" }, { "docid": "2d4348b42befdc8c02d29617311c6377", "text": "Research on Smart Grids has recently focused on the energy monitoring issue, with the objective to maximize the user consumption awareness in building contexts on one hand, and to provide a detailed description of customer habits to the utilities on the other. One of the hottest topics in this field is represented by Non-Intrusive Load Monitoring (NILM): it refers to those techniques aimed at decomposing the consumption aggregated data acquired at a single point of measurement into the diverse consumption profiles of appliances operating in the electrical system under study. The focus here is on unsupervised algorithms, which are the most interesting and of practical use in real case scenarios. Indeed, these methods rely on a sustainable amount of a-priori knowledge related to the applicative context of interest, thus minimizing the user intervention to operate, and are targeted to extract all information to operate directly from the measured aggregate data. This paper reports and describes the most promising unsupervised NILM methods recently proposed in the literature, by dividing them into two main categories: load classification and source separation approaches. An overview of the publicly available datasets used for this purpose and a comparative analysis of the algorithms' performance are provided, together with a discussion of challenges and future research directions.", "title": "" }, { "docid": "fae925bdd47b835035d4f8f0b5b3139d", "text": "By Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin: Network Flows: Theory, Algorithms, and Applications. Bringing together the classic and the contemporary aspects of the field, this comprehensive introduction to network flows provides an integrative view of theory, algorithms, and applications.", "title": "" }, { "docid": "8c07232e73849116c070ffa2367e3c6f", "text": "Childhood apraxia of speech (CAS) is a motor speech disorder characterized by distorted phonemes, segmentation (increased segment and intersegment durations), and impaired production of lexical stress. This study investigated the efficacy of Treatment for Establishing Motor Program Organization (TEMPO) in nine participants (ages 5 to 8) using a delayed treatment group design. Children received four weeks of intervention for four days each week. Experimental probes were administered at baseline and posttreatment—both immediately and one month after treatment—for treated and untreated stimuli. Significant improvements in specific acoustic measures of segmentation and lexical stress were demonstrated following treatment for both the immediate and delayed treatment groups.
Treatment effects for all variables were maintained at one-month post-treatment. These results support the demonstrated efficacy of TEMPO in improving the speech of children with CAS.", "title": "" }, { "docid": "540a6dd82c7764eedf99608359776e66", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "b4743e08bf9b20e3d82a77229cced73d", "text": "Spatial memory tasks, performance of which is known to be sensitive to hippocampal lesions in the rat, or to medial temporal lesions in the human, were administered in order to investigate the effects of selective damage to medial temporal lobe structures of the human brain. The patients had undergone thermo-coagulation with a single electrode along the amygdalo-hippocampal axis in an attempt to alleviate their epilepsy. With this surgical technique, lesions to single medial temporal lobe structures can be carried out. The locations of the lesions were assessed by means of digital high-resolution magnetic resonance imaging and software allowing a 3-D reconstruction of the brain. A break in the collateral sulcus, dividing it into the anterior collateral sulcus and the posterior collateral sulcus is reported. This division may correspond to the end of the entorhinal/perirhinal cortex and the start of the parahippocampal cortex. The results confirmed the role of the right hippocampus in visuo-spatial memory tasks (object location, Rey-Osterrieth Figure with and without delay) and the left for verbal memory tasks (Rey Auditory Verbal Learning Task with delay). However, patients with lesions either to the right or to the left hippocampus were unimpaired on several memory tasks, including a spatial one, with a 30 min delay, designed to be analogous to the Morris water maze. Patients with lesions to the right parahippocampal cortex were impaired on this task with a 30 min delay, suggesting that the parahippocampal cortex itself may play an important role in spatial memory.", "title": "" }, { "docid": "57ffea840501c5e9a77a2c7e0d609d07", "text": "Datasets power computer vison research and drive breakthroughs. Larger and larger datasets are needed to better utilize the exponentially increasing computing power. However, datasets generation is both time consuming and expensive as human beings are required for image labelling. Human labelling cannot scale well. How can we generate larger image datasets easier and faster? In this paper, we provide a new approach for large scale datasets generation. We generate images from 3D object models directly. The large volume of freely available 3D CAD models and mature computer graphics techniques make generating large scale image datasets from 3D models very efficient. As little human effort involved in this process, it can scale very well. 
Rather than releasing a static dataset, we will also provide a software library for dataset generation so that the computer vision community can easily extend or modify the datasets accordingly.", "title": "" }, { "docid": "b5d3c7822f2ba9ca89d474dda5f180b6", "text": "We consider a class of nested optimization problems involving inner and outer objectives. We observe that by taking into explicit account the optimization dynamics for the inner objective it is possible to derive a general framework that unifies gradient-based hyperparameter optimization and meta-learning (or learning-to-learn). Depending on the specific setting, the variables of the outer objective take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We show that some recently proposed methods in the latter setting can be instantiated in our framework and tackled with the same gradient-based algorithms. Finally, we discuss possible design patterns for learning-to-learn and present encouraging preliminary experiments for few-shot learning.", "title": "" }, { "docid": "17ceaef57bfa8bf97a75f4f341c58783", "text": "Slip is the major cause of falls in human locomotion. We present a new bipedal modeling approach to capture and predict human walking locomotion with slips. Compared with the existing bipedal models, the proposed slip walking model includes the human foot rolling effects, the existence of the double-stance gait and active ankle joints. One of the major developments is the relaxation of the nonslip assumption that is used in the existing bipedal models. We conduct extensive experiments to optimize the gait profile parameters and to validate the proposed walking model with slips. The experimental results demonstrate that the model successfully predicts the human recovery gaits with slips.", "title": "" }, { "docid": "c949e051cbfd9cff13d939a7b594e6e6", "text": "Propagation measurements at 28 GHz were conducted in outdoor urban environments in New York City using four different transmitter locations and 83 receiver locations with distances of up to 500 m. A 400 mega-chip per second channel sounder with steerable 24.5 dBi horn antennas at the transmitter and receiver was used to measure the angular distributions of received multipath power over a wide range of propagation distances and urban settings. Measurements were also made to study the small-scale fading of closely-spaced power delay profiles recorded at half-wavelength (5.35 mm) increments along a small-scale linear track (10 wavelengths, or 107 mm) at two different receiver locations. Our measurements indicate that power levels for small-scale fading do not significantly fluctuate from the mean power level at a fixed angle of arrival. We propose here a new lobe modeling technique that can be used to create a statistical channel model for lobe path loss and shadow fading, and we provide many model statistics as a function of transmitter-receiver separation distance. Our work shows that New York City is a multipath-rich environment when using highly directional steerable horn antennas, and that an average of 2.5 signal lobes exists at any receiver location, where each lobe has an average total angle spread of 40.3° and an RMS angle spread of 7.8°.
This work aims to create a 28 GHz statistical spatial channel model for future 5G cellular networks.", "title": "" }, { "docid": "605d2fed747be856d0ae47ddb559d177", "text": "Leukemia is a malignant neoplasm of the blood or bone marrow that affects both children and adults and remains a leading cause of death around the world. Acute lymphoblastic leukemia (ALL) is the most common type of leukemia and is more common among children and young adults. ALL diagnosis through microscopic examination of the peripheral blood and bone marrow tissue samples is performed by hematologists and has been an indispensable technique long since. However, such visual examinations of blood samples are often slow and are also limited by subjective interpretations and less accurate diagnosis. The objective of this work is to improve the ALL diagnostic accuracy by analyzing morphological and textural features from the blood image using image processing. This paper aims at proposing a quantitative microscopic approach toward the discrimination of lymphoblasts (malignant) from lymphocytes (normal) in stained blood smear and bone marrow samples and to assist in the development of a computer-aided screening of ALL. Automated recognition of lymphoblasts is accomplished using image segmentation, feature extraction, and classification over light microscopic images of stained blood films. Accurate and authentic diagnosis of ALL is obtained with the use of improved segmentation methodology, prominent features, and an ensemble classifier, facilitating rapid screening of patients. Experimental results are obtained and compared over the available image data set. It is observed that an ensemble of classifiers leads to 99% accuracy in comparison with other standard classifiers, i.e., naive Bayesian (NB), K-nearest neighbor (KNN), multilayer perceptron (MLP), radial basis function network (RBFN), and support vector machines (SVM).", "title": "" }, { "docid": "94bc9736b80c129338fc490e58378504", "text": "Both reverberation and additive noises degrade the speech quality and intelligibility. The weighted prediction error (WPE) performs well on dereverberation but with limitations. First, the WPE doesn't consider the influence of the additive noise, which degrades the performance of dereverberation. Second, it relies on a time-consuming iterative process, and there is no guarantee or a widely accepted criterion on its convergence. In this paper, we integrate a deep neural network (DNN) into WPE for dereverberation and denoising. DNN is used to suppress the background noise to meet the noise-free assumption of WPE. Meanwhile, DNN is applied to directly predict spectral variance of the target speech to make the WPE work without iteration. The experimental results show that the proposed method has a significant improvement in speech quality and runs fast.", "title": "" }, { "docid": "b4284204ae7d9ef39091a651583b3450", "text": "Embedding learning, a.k.a. representation learning, has been shown to be able to model large-scale semantic knowledge graphs. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Latent variable models are well suited to deal with the high dimensionality and sparsity of typical knowledge graphs. In recent publications the embedding models were extended to also consider temporal evolutions, temporal patterns and subsymbolic representations.
In this paper we map embedding models, which were developed purely as solutions to technical problems for modelling temporal knowledge graphs, to various cognitive memory functions, in particular to semantic and concept memory, episodic memory, sensory memory, short-term memory, and working memory. We discuss learning, query answering, the path from sensory input to semantic decoding, and relationships between episodic memory and semantic memory. We introduce a number of hypotheses on human memory that can be derived from the developed mathematical models. There are three main hypotheses. The first one is that semantic memory is described as triples and that episodic memory is described as triples in time. A second main hypothesis is that generalized entities have unique latent representations which are shared across memory functions and that are the basis for prediction, decision support and other functionalities executed by working memory. A third main hypothesis is that the latent representation for a time t, which summarizes all sensory information available at time t, is the basis for episodic memory. The proposed model includes both a recall of previous memories and the mental imagery of future events and sensory impressions.", "title": "" }, { "docid": "b563c69fc65fa8fd8d560aab9d4c20a0", "text": "Individuals who are given a preventive exam by a primary care provider are more likely to agree to cancer screening. The provider recommendation has been identified as the strongest factor associated with screening utilization. This article provides a framework for breast cancer risk assessment for an advanced practice registered nurse working in primary care practice.", "title": "" }, { "docid": "11dcf37ac87629a1c795602b255d10bc", "text": "Deriving the polarity and strength of opinions is an important research topic, attracting significant attention over the last few years. In this work, to measure the strength and polarity of an opinion, we consider the economic context in which the opinion is evaluated, instead of using human annotators or linguistic resources. We rely on the fact that text in on-line systems influences the behavior of humans and this effect can be observed using some easy-to-measure economic variables, such as revenues or product prices. By reversing the logic, we infer the semantic orientation and strength of an opinion by tracing the changes in the associated economic variable. In effect, we use econometrics to identify the “economic value of text” and assign a “dollar value” to each opinion phrase, measuring sentiment effectively and without the need for manual labeling. We argue that by interpreting opinions using econometrics, we have the first objective, quantifiable, and contextsensitive evaluation of opinions. We make the discussion concrete by presenting results on the reputation system of Amazon.com. We show that user feedback affects the pricing power of merchants and by measuring their pricing power we can infer the polarity and strength of the underlying feedback postings.", "title": "" }, { "docid": "4dc20aa2c72a95022ba6cf3b592960a8", "text": "Relation Classification aims to classify the semantic relationship between two marked entities in a given sentence. It plays a vital role in a variety of natural language processing applications. Most existing methods focus on exploiting mono-lingual data, e.g., in English, due to the lack of annotated data in other languages. 
In this paper, we come up with a feature adaptation approach for cross-lingual relation classification, which employs a generative adversarial network (GAN) to transfer feature representations from one language with rich annotated data to another language with scarce annotated data. Such a feature adaptation approach enables feature imitation via the competition between a relation classification network and a rival discriminator. Experimental results on the ACE 2005 multilingual training corpus, treating English as the source language and Chinese the target, demonstrate the effectiveness of our proposed approach, yielding an improvement of 5.7% over the state-of-the-art.", "title": "" }, { "docid": "21e70e26354a55d35f99c6bcddfc62ca", "text": "Increased focus on JavaScript performance has resulted in vast performance improvements for many benchmarks. However, for actual code used in websites, the attained improvements often lag far behind those for popular benchmarks.\n This paper shows that the main reason behind this short-fall is how the compiler understands types. JavaScript has no concept of types, but the compiler assigns types to objects anyway for ease of code generation. We examine the way that the Chrome V8 compiler defines types, and identify two design decisions that are the main reasons for the lack of improvement: (1) the inherited prototype object is part of the current object's type definition, and (2) method bindings are also part of the type definition. These requirements make types very unpredictable, which hinders type specialization by the compiler. Hence, we modify V8 to remove these requirements, and use it to compile the JavaScript code assembled by JSBench from real websites. On average, we reduce the execution time of JSBench by 36%, and the dynamic instruction count by 49%.", "title": "" } ]
scidocsrr
1aefa6b82c578ef7bb567d885c1dc7c1
Learning Where to Attend with Deep Architectures for Image Tracking
[ { "docid": "065c24bc712f7740b95e0d1a994bfe19", "text": "David Haussler Computer and Information Sciences University of California Santa Cruz Santa Cruz , CA 95064 We study a particular type of Boltzmann machine with a bipartite graph structure called a harmonium. Our interest is in using such a machine to model a probability distribution on binary input vectors . We analyze the class of probability distributions that can be modeled by such machines. showing that for each n ~ 1 this class includes arbitrarily good appwximations to any distribution on the set of all n-vectors of binary inputs. We then present two learning algorithms for these machines .. The first learning algorithm is the standard gradient ascent heuristic for computing maximum likelihood estimates for the parameters (i.e. weights and thresholds) of the modeL Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine . The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of the standard method for projection pursuit density estimation . We give experimental results for these learning methods on synthetic data and natural data from the domain of handwritten digits.", "title": "" }, { "docid": "944dd53232522155103fc2d1578041dd", "text": "Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the model’s estimate of the objective and the uncertainty at any given point. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We propose several portfolio strategies, the best of which we call GP-Hedge, and show that this method outperforms the best individual acquisition function. We also provide a theoretical bound on the algorithm’s performance.", "title": "" }, { "docid": "687ac21bd828ae6d559ef9f55064dec0", "text": "We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments—active user modelling with preferences, and hierarchical reinforcement learning— and a discussion of the pros and cons of Bayesian optimization based on our experiences.", "title": "" } ]
[ { "docid": "e78e70d347fb76a79755442cabe1fbe0", "text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, unimodal priors — such as the multivariate Gaussian distribution — yet many realworld data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.", "title": "" }, { "docid": "c8ba8d59bb92778921eea146181fa2b8", "text": "MOTIVATION\nProtein interaction networks provide an important system-level view of biological processes. One of the fundamental problems in biological network analysis is the global alignment of a pair of networks, which puts the proteins of one network into correspondence with the proteins of another network in a manner that conserves their interactions while respecting other evidence of their homology. By providing a mapping between the networks of different species, alignments can be used to inform hypotheses about the functions of unannotated proteins, the existence of unobserved interactions, the evolutionary divergence between the two species and the evolution of complexes and pathways.\n\n\nRESULTS\nWe introduce GHOST, a global pairwise network aligner that uses a novel spectral signature to measure topological similarity between subnetworks. It combines a seed-and-extend global alignment phase with a local search procedure and exceeds state-of-the-art performance on several network alignment tasks. We show that the spectral signature used by GHOST is highly discriminative, whereas the alignments it produces are also robust to experimental noise. When compared with other recent approaches, we find that GHOST is able to recover larger and more biologically significant, shared subnetworks between species.\n\n\nAVAILABILITY\nAn efficient and parallelized implementation of GHOST, released under the Apache 2.0 license, is available at http://cbcb.umd.edu/kingsford_group/ghost\n\n\nCONTACT\nrob@cs.umd.edu.", "title": "" }, { "docid": "bd0691351920e8fa74c8197b9a4e91e0", "text": "Mobile robot vision-based navigation has been the source of countless research contributions, from the domains of both vision and control. Vision is becoming more and more common in applications such as localization, automatic map construction, autonomous navigation, path following, inspection, monitoring or risky situation detection. This survey presents those pieces of work, from the nineties until nowadays, which constitute a wide progress in visual navigation techniques for land, aerial and autonomous underwater vehicles. The paper deals with two major approaches: map-based navigation and mapless navigation. 
Map-based navigation has been in turn subdivided into metric map-based navigation and topological map-based navigation. Our outline of mapless navigation includes reactive techniques based on qualitative characteristics extraction, appearance-based localization, optical flow, feature tracking, plane ground detection/tracking, etc... The recent concept of visual sonar has also been reviewed.", "title": "" }, { "docid": "39a072046506be080cbe153b9f6d1c77", "text": "Reverse engineering has many important applications in computer security, one of which is retrofitting software for safety and security hardening when source code is not available. By surveying available commercial and academic reverse engineering tools, we surprisingly found that no existing tool is able to disassemble executable binaries into assembly code that can be correctly assembled back in a fully automated manner, even for simple programs. Actually in many cases, the resulting disassembled code is far from a state that an assembler accepts, which is hard to fix even by manual effort. This has become a severe obstacle. People have tried to overcome it by patching or duplicating new code sections for retrofitting of executables, which is not only inefficient but also cumbersome and restrictive on what retrofitting techniques can be applied. In this paper, we present UROBOROS, a tool that can disassemble executables to the extent that the generated code can be assembled back to working binaries without manual effort. By empirically studying 244 binaries, we summarize a set of rules that can make the disassembled code relocatable, which is the key to reassembleable disassembling. With UROBOROS, the disassembly-reassembly process can be repeated thousands of times. We have implemented a prototype of UROBOROS and tested it over the whole set of GNU Coreutils, SPEC2006, and a set of other real-world application and server programs. The experimental results show that our tool is effective with a very modest cost.", "title": "" }, { "docid": "13a9329bdd46ba243003090bf219a20a", "text": "Visual art represents a powerful resource for mental and physical well-being. However, little is known about the underlying effects at a neural level. A critical question is whether visual art production and cognitive art evaluation may have different effects on the functional interplay of the brain's default mode network (DMN). We used fMRI to investigate the DMN of a non-clinical sample of 28 post-retirement adults (63.71 years ±3.52 SD) before (T0) and after (T1) weekly participation in two different 10-week-long art interventions. Participants were randomly assigned to groups stratified by gender and age. In the visual art production group, 14 participants actively produced art in an art class. In the cognitive art evaluation group, 14 participants cognitively evaluated artwork at a museum. The DMN of both groups was identified by using a seed voxel correlation analysis (SCA) in the posterior cingulate cortex (PCC/preCUN). An analysis of covariance (ANCOVA) was employed to relate fMRI data to psychological resilience, which was measured with the brief German counterpart of the Resilience Scale (RS-11). We observed that the visual art production group showed greater spatial improvement in functional connectivity of PCC/preCUN to the frontal and parietal cortices from T0 to T1 than the cognitive art evaluation group. 
Moreover, the functional connectivity in the visual art production group was related to psychological resilience (i.e., stress resistance) at T1. Our findings are the first to demonstrate the neural effects of visual art production on psychological resilience in adulthood.", "title": "" }, { "docid": "39392798c76bd7d8e9fe089edc8cfe6a", "text": "Wearable haptic devices with poor position sensing are combined with the Kinect depth sensor by Microsoft. A heuristic hand tracker has been developed. It allows for the animation of the hand avatar in virtual reality and the implementation of the force rendering algorithm: the position of the fingertips is measured by the hand tracker designed and optimized for Kinect, and the rendering algorithm computes the contact forces for wearable haptic display. Preliminary experiments with qualitative results show the effectiveness of the idea of combining Kinect and wearable haptics.", "title": "" }, { "docid": "d57491b0ba1e68597ce2937534983c92", "text": "Inspired by the natural features of the variable size of the population, we present a variable population-size genetic algorithm (VPGA) by introducing the “dying probability” for the individuals and the “war/disease process” for the population. Based on the VPGA and the particle swarm optimization (PSO) algorithms, a novel PSO-GA-based hybrid algorithm (PGHA) is proposed in this paper. Simulation results show that both VPGA and PGHA are effective for the optimization problems.", "title": "" }, { "docid": "70e77a493b8b4eb0052218757d680623", "text": "Coverage Path Planning (CPP) describes the process of generating robot trajectories that fully cover an area or volume. Applications are, amongst many others, mobile cleaning robots, lawn mowing robots or harvesting machines in agriculture. Many approaches and facets of this problem have been discussed in the literature, but despite the availability of several surveys on the topic there is little work on quantitative assessment and comparison of different coverage path planning algorithms. This paper analyzes six popular off-line coverage path planning methods, applicable to previously recorded maps, in the setting of indoor coverage path planning on room-sized units. The implemented algorithms are thoroughly compared on a large dataset of over 550 rooms with and without furniture.", "title": "" }, { "docid": "a84d2de19a34b914e583c9f4379b68da", "text": "English) xx Abstract(Arabic) xxiiArabic) xxii", "title": "" }, { "docid": "8020c67dd790bcff7aea0e103ea672f1", "text": "Recent efforts in satellite communication research considered the exploitation of higher frequency bands as a valuable alternative to conventional spectrum portions. An example of this can be provided by the W-band (70-110 GHz). Recently, a scientific experiment carried out by the Italian Space Agency (ASI), namely the DAVID-DCE experiment, was aimed at exploring the technical feasibility of the exploitation of the W-band for broadband networking applications. Some preliminary results of DAVID research activities pointed out that phase noise and high Doppler-shift can severely compromise the efficiency of the modulation system, particularly with regard to carrier recovery. This problem becomes very critical when the use of spectrally efficient M-ary modulations is considered in order to profitably exploit the large amount of bandwidth available in the W-band. 
In this work, a novel carrier recovery algorithm has been proposed for a 16-QAM modulation and tested, considering the presence of phase noise and other kinds of non-ideal behaviors of the communication devices typical of W-band satellite transmission. Simulation results demonstrated the effectiveness of the proposed solution for carrier recovery and pointed out the achievable spectral efficiency of the transmission system, considering some constraints on transmitted power, data BER and receiver bandwidth.", "title": "" }, { "docid": "7259530c42f4ba91155284ce909d25a6", "text": "We investigate how information leakage reduces computational entropy of a random variable X. Recall that HILL and metric computational entropy are parameterized by quality (how distinguishable is X from a variable Z that has true entropy) and quantity (how much true entropy is there in Z). We prove an intuitively natural result: conditioning on an event of probability p reduces the quality of metric entropy by a factor of p and the quantity of metric entropy by log2(1/p) (note that this means that the reduction in quantity and quality is the same, because the quantity of entropy is measured on a logarithmic scale). Our result improves previous bounds of Dziembowski and Pietrzak (FOCS 2008), where the loss in the quantity of entropy was related to its original quality. The use of metric entropy simplifies the analogous result of Reingold et al. (FOCS 2008) for HILL entropy. Further, we simplify dealing with information leakage by investigating conditional metric entropy. We show that, conditioned on leakage of λ bits, metric entropy gets reduced by a factor of 2^λ in quality and by λ in quantity. Our formulation allows us to formulate a “chain rule” for leakage on computational entropy. We show that conditioning on λ bits of leakage reduces conditional metric entropy by λ bits. This is the same loss as leaking from unconditional metric entropy. This result makes it easy to measure entropy even after several rounds of information leakage.", "title": "" }, { "docid": "47aee90be18e5f2b906d97c67f6016e7", "text": "Embedded VLC (Visible Light Communication) has attracted significant research attention in recent years. A reliable and robust VLC system can become one of the IoT communication technologies for indoor environments. VLC could become a wireless technology complementary to existing RF-based technology but with no RF interference. However, existing low-cost LED-based VLC platforms have limited throughput and reliability. In this work, we introduce Purple VLC: a new embedded VLC platform that can achieve 100 kbps aggregate throughput at a distance of 6 meters, which is a 6-7x improvement over the state-of-the-art. Our design combines I/O offloading in computation, concurrent communication with polarized light, and full-duplexing to offer more than 99% link reliability at a distance of 6 meters.", "title": "" }, { "docid": "9519c76e5868bc59d2725f0ce603fa3d", "text": "Video-based person re-identification plays a central role in realistic security and video surveillance. In this paper, we propose a novel accumulative motion context (AMOC) network for addressing this important problem, which effectively exploits the long-range motion context for robustly identifying the same person under challenging conditions. Given a video sequence of the same or different persons, the proposed AMOC network jointly learns appearance representation and motion context from a collection of adjacent frames using a two-stream convolutional architecture. 
Then, AMOC accumulates clues from motion context by recurrent aggregation, allowing effective information flow among adjacent frames and capturing the dynamic gist of the persons. The architecture of AMOC is end-to-end trainable, and thus, motion context can be adapted to complement appearance clues under unfavorable conditions (e.g., occlusions). Extensive experiments are conducted on three public benchmark data sets, i.e., the iLIDS-VID, PRID-2011, and MARS data sets, to investigate the performance of AMOC. The experimental results demonstrate that the proposed AMOC network significantly outperforms the state of the art for video-based re-identification and confirm the advantage of exploiting long-range motion context for video-based person re-identification, clearly validating our motivation.", "title": "" }, { "docid": "fcd25f888ad7fb695945208ab4909086", "text": "Artificial Intelligence (AI) has been studied for decades and is still one of the most elusive subjects in Computer Science. This is partly due to how large and nebulous the subject is. AI ranges from machines truly capable of thinking to search algorithms used to play board games. It has applications in nearly every way we use computers in society. This paper is about examining the history of artificial intelligence from theory to practice and from its rise to fall, highlighting a few major themes and advances. The term artificial intelligence was first coined by John McCarthy in 1956 when he held the first academic conference on the subject. But the journey to understand if machines can truly think began well before that. In Vannevar Bush's seminal work As We May Think [Bush45] he proposed a system which amplifies people's own knowledge and understanding. Five years later Alan Turing wrote a paper on the notion of machines being able to simulate human beings and the ability to do intelligent things, such as play Chess [Turing50]. No one can refute a computer's ability to process logic. But to many it is unknown if a machine can think. The precise definition of 'think' is important because there has been some strong opposition as to whether or not this notion is even possible. For example, there is the so-called 'Chinese room' argument [Searle80]. Imagine someone is locked in a room, where they are passed notes in Chinese. Using an entire library of rules and look-up tables they would be able to produce valid responses in Chinese, but would they really 'understand' the language? The argument is that since computers would always be applying rote fact lookup they could never 'understand' a subject. This argument has been refuted in numerous ways by researchers, but it does undermine people's faith in machines and so-called expert systems in life-critical applications. The main advances over the past sixty years have been advances in search algorithms, machine learning algorithms, and integrating statistical analysis into understanding the world at large. However, most of the breakthroughs in AI aren't noticeable to most people. Rather than talking machines used to pilot space ships to Jupiter, AI is used in more subtle ways such as examining purchase histories and influencing marketing decisions [Shaw01]. What most people think of as 'true AI' hasn't experienced rapid progress over the decades. A common theme in the …", "title": "" }, { "docid": "2c4c7f8dcf1681e278183525d520fc8c", "text": "In the course of studies on the isolation of bioactive compounds from Philippine plants, the seeds of Moringa oleifera Lam. 
were examined and from the ethanol extract were isolated the new O-ethyl-4-(alpha-L-rhamnosyloxy)benzyl carbamate (1) together with seven known compounds, 4-(alpha-L-rhamnosyloxy)benzyl isothiocyanate (2), niazimicin (3), niazirin (4), beta-sitosterol (5), glycerol-1-(9-octadecanoate) (6), 3-O-(6'-O-oleoyl-beta-D-glucopyranosyl)-beta-sitosterol (7), and beta-sitosterol-3-O-beta-D-glucopyranoside (8). Four of the isolates (2, 3, 7, and 8), which were obtained in relatively good yields, were tested for their potential antitumor promoting activity using an in vitro assay which tested their inhibitory effects on Epstein-Barr virus-early antigen (EBV-EA) activation in Raji cells induced by the tumor promoter, 12-O-tetradecanoyl-phorbol-13-acetate (TPA). All the tested compounds showed inhibitory activity against EBV-EA activation, with compounds 2, 3 and 8 having shown very significant activities. Based on the in vitro results, niazimicin (3) was further subjected to an in vivo test and found to have potent antitumor promoting activity in the two-stage carcinogenesis in mouse skin using 7,12-dimethylbenz(a)anthracene (DMBA) as initiator and TPA as tumor promoter. From these results, niazimicin (3) is proposed to be a potent chemo-preventive agent in chemical carcinogenesis.", "title": "" }, { "docid": "c1aa687c4a48cfbe037fe87ed4062dab", "text": "This paper deals with the modelling and control of a single-sided linear switched reluctance actuator. This study provides the modelling and proposes a study of open- and closed-loop controls for the studied motor. From the proposed model, its dynamic behavior is described and discussed in detail. In addition, a simpler controller based on a PID regulator is employed to upgrade the dynamic behavior of the motor. The closed-loop simulation results show a significant improvement in dynamic response compared with open loop. In fact, this simple type of controller offers the possibility to improve the dynamic response for the sliding-door application.", "title": "" }, { "docid": "0b52e4be9b45d109e13750f522aa84a3", "text": "This dissertation presents, discusses, and sheds some light on the problems that appear when computers try to automatically classify musical genres from audio signals. In particular, a method is proposed for automatic music genre classification, using a computational approach that is inspired by music cognition and musicology in addition to Music Information Retrieval techniques. In this context, we design a set of experiments by combining the different elements that may affect the accuracy of the classification (audio descriptors, machine learning algorithms, etc.). We evaluate, compare and analyze the obtained results in order to explain the existing glass-ceiling in genre classification, and propose new strategies to overcome it. Moreover, starting from polyphonic audio content processing, we include musical and cultural aspects of musical genre that have usually been neglected in current state-of-the-art approaches. This work studies different families of audio descriptors related to timbre, rhythm, tonality and other facets of music, which have not been frequently addressed in the literature. Some of these descriptors are proposed by the author and others come from previous studies. We also compare machine learning techniques commonly used for classification and analyze how they can deal with the genre classification problem. 
We also present a discussion on their ability to represent the different classification models proposed in cognitive science. Moreover, the classification results using the machine learning techniques are contrasted with the results of some proposed listening experiments. This comparison drives us to think of a specific architecture of classifiers that will be justified and described in detail. It is also one of the objectives of this dissertation to compare results under different data configurations, that is, using different datasets, mixing them and reproducing some real scenarios in which genre classifiers could be used (huge datasets). As a conclusion, we discuss how the classification architecture proposed here can break the existing glass-ceiling effect in automatic genre classification. To sum up, this dissertation contributes to the field of automatic genre classification: a) It provides a multidisciplinary review of musical genres and their classification; b) It provides a qualitative and quantitative evaluation of families of audio descriptors used for automatic classification; c) It evaluates different machine learning techniques and their pros and cons in the context of genre classification; d) It proposes a new architecture of classifiers after analyzing music genre classification from different disciplines; e) It analyzes the behavior of this proposed architecture in different environments consisting of huge or mixed datasets.", "title": "" }, { "docid": "29b7a8f450c46e87c7c3a5c60291dba6", "text": "The purpose of this investigation was to determine whether force platform measurements can be used to objectively assess short-term effects of spinal manipulation on patients with diagnosed, chronic unilateral \"sacroiliac dyskinesia,\" here defined as decreased interarticular mobility of the sacroiliac joint. Nine patients walked across a force platform, were then manipulated by a chiropractor and then repeated the gait trials. Temporal and kinetic gait variables from the force platform measurements were analyzed for changes in the symmetry of the subjects' gait before and after treatment sessions. There was a distinct tendency towards improved gait symmetry after treatment in those cases where the gait was asymmetric prior to the treatment. This result indicated that force platform measurements may be used successfully to assess the effects of spinal manipulations on the gait of patients with sacroiliac dyskinesia.", "title": "" }, { "docid": "a6ed725fb7325eaeab50d0c9a7741cb4", "text": "Plant-microbe associations are thought to be beneficial for plant growth and resistance against biotic or abiotic stresses, but for natural ecosystems, the ecological analysis of microbiome function remains in its infancy. We used transformed wild tobacco plants (Nicotiana attenuata) which constitutively express an antimicrobial peptide (Mc-AMP1) of the common ice plant, to establish an ecological tool for plant-microbe studies in the field. Transgenic plants showed in planta activity against plant-beneficial bacteria and were phenotyped within the plants' natural habitat regarding growth, fitness and resistance against herbivores. Multiple field experiments, conducted over 3 years, indicated no differences compared to isogenic controls. Pyrosequencing analysis of the root-associated microbial communities showed no major alterations but marginal effects at the genus level. 
Experimental infiltrations revealed a high heterogeneity in peptide tolerance among native isolates and suggest that the diversity of natural microbial communities can be a major obstacle for microbiome manipulations in nature.", "title": "" }, { "docid": "7e7fc57baab9f8be5032ce71529603d1", "text": "Many companies are now providing customer service through social media, helping and engaging their customers on a real-time basis. To study this increasingly popular practice, we examine how major airlines respond to customer comments on Twitter by exploiting a large data set containing all Twitter exchanges between customers and four major airlines from June 2013 to August 2014. We find that these airlines pay significantly more attention to Twitter users with more followers, suggesting that companies literally discriminate among customers based on their social influence. Moreover, our findings suggest that companies in the digital age are increasingly more sensitive to the need to answer both customer complaints and customer compliments.", "title": "" }
]
scidocsrr